[jira] [Created] (HDFS-13883) Reduce memory consumption and GC of directory scan

2018-08-30 Thread liaoyuxiangqin (JIRA)
liaoyuxiangqin created HDFS-13883:
-

 Summary: Reduce memory consumption and GC of directory scan
 Key: HDFS-13883
 URL: https://issues.apache.org/jira/browse/HDFS-13883
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.2.0
Reporter: liaoyuxiangqin


When the DirectoryScan task is triggered periodically, the scan thread scans 
every disk in the DataNode for every block pool and constructs a ScanInfo per 
block. So the DataNode needs a huge amount of memory to hold those ScanInfo 
structures when tens of millions of blocks are stored on it.

Another problem is that the DataNode runs as a JVM, so we need to set a big 
-Xmx to satisfy the memory needs of DirectoryScan. But the default period of 
DirectoryScan is 6 hours, the DataNode actually needs much less memory the 
rest of the time, and the JVM can't automatically return free memory to the 
OS, so memory utilization is low.

Finally, we tested the DataNode with DirectoryScan disabled and enabled while 
storing thirty million blocks, with -Xmx set to 16G and 32G respectively. So 
I think we can improve the DirectoryScan process to save memory, for example 
by scanning one block pool per period. Thanks.
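A minimal sketch of the one-block-pool-per-period idea (the class and method
names here are hypothetical, not the actual DirectoryScanner API):

{code:java}
// Sketch only: rotate through the block pools, scanning one per period, so
// at most one pool's ScanInfo objects are resident at a time and peak heap
// is bounded by the largest pool rather than the whole DataNode.
import java.util.Collections;
import java.util.List;

class OnePoolPerPeriodScanner {
  private final List<String> blockPoolIds;
  private int next = 0;

  OnePoolPerPeriodScanner(List<String> blockPoolIds) {
    this.blockPoolIds = blockPoolIds;
  }

  /** Invoked once per scan period (default: every 6 hours). */
  void runOnce() {
    String bpid = blockPoolIds.get(next);
    next = (next + 1) % blockPoolIds.size();
    List<Object> scanInfos = scanBlockPool(bpid); // ScanInfo for this pool only
    reconcile(bpid, scanInfos);
    // scanInfos is unreachable after this point and can be GC'd well before
    // the next period starts.
  }

  private List<Object> scanBlockPool(String bpid) {
    return Collections.emptyList(); // placeholder: walk this pool's disks
  }

  private void reconcile(String bpid, List<Object> scanInfos) {
    // placeholder: diff scan results against the in-memory block map
  }
}
{code}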

   






[jira] [Created] (HDDS-385) Optimize pipeline creation by sending reinitialization to all the nodes in parallel

2018-08-30 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-385:
--

 Summary: Optimize pipeline creation by sending reinitialization to 
all the nodes in parallel
 Key: HDDS-385
 URL: https://issues.apache.org/jira/browse/HDDS-385
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Affects Versions: 0.2.1
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: 0.2.1


Currently, during pipeline creation, reinitialization is sent serially to the 
nodes. This can be optimized by sending the reinitializations to all the 
nodes in parallel.
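A minimal sketch of the parallel fan-out (the node type and the
sendReinitialize RPC are placeholders, not the actual SCM pipeline code):

{code:java}
// Sketch only: submit the per-node reinitialization RPCs to an executor and
// wait for all of them, so total latency is bounded by the slowest node
// rather than the sum over all nodes.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ParallelReinitSketch {
  void reinitializePipeline(List<String> nodes) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(nodes.size());
    try {
      List<Future<?>> pending = new ArrayList<>();
      for (String node : nodes) {
        pending.add(pool.submit(() -> sendReinitialize(node)));
      }
      for (Future<?> f : pending) {
        f.get(); // surfaces the first per-node failure
      }
    } finally {
      pool.shutdown();
    }
  }

  private void sendReinitialize(String node) {
    // placeholder for the per-datanode reinitialization RPC
  }
}
{code}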






[jira] [Created] (HDDS-386) Create a datanode cli

2018-08-30 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-386:


 Summary: Create a datanode cli
 Key: HDDS-386
 URL: https://issues.apache.org/jira/browse/HDDS-386
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Lokesh Jain
Assignee: Lokesh Jain
 Fix For: 0.2.1


For block deletion we need a debug CLI on the datanode to report the state of 
the containers and the number of chunks present in each container.






[jira] [Created] (HDFS-13884) Improve the description of the setting dfs.image.compress

2018-08-30 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-13884:


 Summary: Improve the description of the setting dfs.image.compress
 Key: HDFS-13884
 URL: https://issues.apache.org/jira/browse/HDFS-13884
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.1.1
Reporter: Yiqun Lin
Assignee: Ryan Wu


In HDFS-1435, we introduced a new option to store the fsimage compressed. 
This avoids consuming a lot of network bandwidth when the SBN uploads a new 
fsimage to the ANN. When a lot of network bandwidth is consumed, it affects 
the ANN's ability to handle normal RPC requests and sync edit logs.

This is a very useful setting when the fsimage file is very large. However, 
the current description of this setting is too brief; we should document it 
more thoroughly.
{noformat}
<property>
  <name>dfs.image.compress</name>
  <value>false</value>
  <description>Should the dfs image be compressed?
  </description>
</property>
{noformat}
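For example (the wording below is only a suggestion, not final text), the
entry could spell out the trade-off and point to the related codec setting
dfs.image.compression.codec:

{noformat}
<property>
  <name>dfs.image.compress</name>
  <value>false</value>
  <description>
    Whether to compress the fsimage when saving a checkpoint. Compression
    reduces the size of the fsimage on disk and the network bandwidth used
    when a new fsimage is uploaded to the active NameNode, at the cost of
    extra CPU during saving and loading. The compression codec is set by
    dfs.image.compression.codec.
  </description>
</property>
{noformat}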






Jenkins build machines down

2018-08-30 Thread Steve Loughran

For anyone who has noticed that the links off JIRA for build status are 
broken and new patches aren't being reviewed: the Jenkins server has lost a 
disk after a RAID controller failure.

https://status.apache.org/incidents/4zl6mkyrg8qt
https://twitter.com/infrabot/status/1034918770421026816

Assume it'll be back by Friday and don't hit the "submit-patch" button until 
then, as it will probably only be ignored. Oh, and there'll inevitably be a 
build backlog too...

-Steve




[jira] [Created] (HDFS-13885) Improve debugging experience of dfsclient decrypts

2018-08-30 Thread Kitti Nanasi (JIRA)
Kitti Nanasi created HDFS-13885:
---

 Summary: Improve debugging experience of dfsclient decrypts
 Key: HDFS-13885
 URL: https://issues.apache.org/jira/browse/HDFS-13885
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 3.1.0
Reporter: Kitti Nanasi
Assignee: Kitti Nanasi


We want to be able to tell from the HDFS client log (e.g. HBase RS logs), for 
each CryptoOutputStream, approximately when the decrypt happens and when the 
file read happens, to help us rule out or identify the HDFS NN / KMS / DN 
being slow.
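A sketch of the kind of client-side timing log that would help (placement and
names are illustrative only, not the actual crypto stream code):

{code:java}
// Illustrative fragment only: time the underlying read and the decrypt step
// separately, so a slow NN/KMS/DN can be distinguished in the client log.
long t0 = Time.monotonicNow();                 // org.apache.hadoop.util.Time
int nRead = wrappedStream.read(buffer, offset, length); // read from the DN
long readMs = Time.monotonicNow() - t0;

t0 = Time.monotonicNow();
decrypt(buffer, offset, nRead);   // placeholder for the client decrypt step
long decryptMs = Time.monotonicNow() - t0;

if (LOG.isDebugEnabled()) {
  LOG.debug("read {} bytes in {} ms, decrypted in {} ms",
      nRead, readMs, decryptMs);
}
{code}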

 






Re: Hadoop 3.2 Release Plan proposal

2018-08-30 Thread Sunil G
Hi All,

In line with earlier communication dated 17th July 2018, I would like to
provide some updates.

We are approaching the previously proposed code freeze date (Aug 31).

The merge discussion/vote for one critical feature, Node Attributes, is
ongoing, and a few other Blocker bugs need a bit more time. With regard to
this, I suggest pushing the feature/code freeze out by 2 more weeks to
accommodate these jiras too.

Proposed updates to the plan in line with this:
Feature freeze date: all features to merge by September 7, 2018.
Code freeze date: September 14, 2018; blocker/critical bug fixes only, no
improvements.
Release date: September 28, 2018

If you have any feature branches targeted to 3.2.0, please reply to this
email thread.

*Here's an updated 3.2.0 feature status:*

1. Merged & Completed features:

- (Wangda) YARN-8561: Hadoop Submarine project for DeepLearning workloads
Initial cut.
- (Uma) HDFS-10285: HDFS Storage Policy Satisfier
- (Sunil) YARN-7494: Multi Node scheduling support in Capacity Scheduler.
- (Chandni/Eric) YARN-7512: Support service upgrade via YARN Service API
and CLI.

2. Features close to finish:

- (Naga/Sunil) YARN-3409: Node Attributes support in YARN. Merge/Vote
Ongoing.
- (Rohith) YARN-5742: Serve aggregated logs of historical apps from ATSv2.
Patch in progress.
- (Virajit) HDFS-12615: Router-based HDFS federation. Improvement works.
- (Steve) S3Guard Phase III, S3a phase V, Support Windows Azure Storage. In
progress.

3. Tentative features:

- (Haibo Chen) YARN-1011: Resource overcommitment. Looks challenging to be
done before Aug 2018.
- (Eric) YARN-7129: Application Catalog for YARN applications. Challenging
as more discussions are on-going.

*Summary of 3.2.0 issues status:*

26 Blocker and Critical issues [1] are open; I am following up with the
owners on the status of each so that they get in by the code freeze date.

[1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND priority in (Blocker,
Critical) AND resolution = Unresolved AND "Target Version/s" = 3.2.0 ORDER
BY priority DESC

Thanks,
Sunil

On Tue, Aug 14, 2018 at 10:30 PM Sunil G  wrote:

> Hi All,
>
> Thanks for the feedbacks. Inline with earlier communication dated 17th
> July 2018, I would like to provide some updates.
>
> We are approaching previously proposed feature freeze date (Aug 21, about
> 7 days from today).
> If any features in branch which are targeted to 3.2.0, please reply to
> this email thread.
> Steve has mentioned about the s3 features which will come close to Code
> Freeze Date (Aug 31st).
>
> *Here's an updated 3.2.0 feature status:*
>
> 1. Merged & Completed features:
>
> - (Wangda) YARN-8561: Hadoop Submarine project for DeepLearning workloads
> Initial cut.
> - (Uma) HDFS-10285: HDFS Storage Policy Satisfier
>
> 2. Features close to finish:
>
> - (Naga/Sunil) YARN-3409: Node Attributes support in YARN. Major patches
> are all in, only one last
> patch is in review state.
> - (Sunil) YARN-7494: Multi Node scheduling support in Capacity Scheduler.
> Close to commit.
> - (Chandni/Eric) YARN-7512: Support service upgrade via YARN Service API
> and CLI. 2 patches are pending
> which will be closed by Feature freeze date.
> - (Rohith) YARN-5742: Serve aggregated logs of historical apps from ATSv2.
> Patch in progress.
> - (Virajit) HDFS-12615: Router-based HDFS federation. Improvement works.
> - (Steve) S3Guard Phase III, S3a phase V, Support Windows Azure Storage.
> In progress.
>
> 3. Tentative features:
>
> - (Haibo Chen) YARN-1011: Resource overcommitment. Looks challenging to be
> done before Aug 2018.
> - (Eric) YARN-7129: Application Catalog for YARN applications. Challenging
> as more discussions are on-going.
>
> *Summary of 3.2.0 issues status:*
>
> 39 Blocker and Critical issues [1] are open, I am checking with owners to
> get status on each of them to get in by Code Freeze date.
>
> [1] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND priority in (Blocker,
> Critical) AND resolution = Unresolved AND "Target Version/s" = 3.2.0 ORDER
> BY priority DESC
>
> Thanks,
> Sunil
>
> On Fri, Jul 20, 2018 at 8:03 AM Sunil G  wrote:
>
>> Thanks Subru for the thoughts.
>> One of the main reasons for a major release is to push critical features
>> out to users at a faster cadence. If we pull more and more different types
>> of features into a minor release, that branch becomes more destabilized,
>> and it may be tough to say that 3.1.2 is more stable than 3.1.1, for
>> example. We always tend to improve and stabilize features in subsequent
>> minor releases.
>> For a few companies, it makes sense to push these new features out faster
>> to reach their users. On the backporting issues, I agree that it's a pain
>> and we can work around it with some git scripts. If we make such scripts
>> available to committers, backporting will be seamless across branches and
>> we can achieve the faster release cadence as well.
>>
>> Thoughts?
>>
>> - Sunil
>>
>>

[jira] [Reopened] (HDFS-13830) Backport HDFS-13141 to branch-3.0: WebHDFS: Add support for getting snapshottable directory list

2018-08-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng reopened HDFS-13830:
---

> Backport HDFS-13141 to branch-3.0: WebHDFS: Add support for getting 
> snapshottable directory list
> 
>
> Key: HDFS-13830
> URL: https://issues.apache.org/jira/browse/HDFS-13830
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-13830.branch-3.0.001.patch, 
> HDFS-13830.branch-3.0.002.patch, HDFS-13830.branch-3.0.003.patch, 
> HDFS-13830.branch-3.0.004.patch
>
>
> HDFS-13141 conflicts with 3.0.3 because of an interface change in HdfsFileStatus.
> This Jira aims to backport the WebHDFS getSnapshottableDirListing() support 
> to branch-3.0.






[jira] [Reopened] (HDFS-13838) WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" status

2018-08-30 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng reopened HDFS-13838:
---

> WebHdfsFileSystem.getFileStatus() won't return correct "snapshot enabled" 
> status
> 
>
> Key: HDFS-13838
> URL: https://issues.apache.org/jira/browse/HDFS-13838
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, webhdfs
>Affects Versions: 3.1.0, 3.0.3
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: HDFS-13838.001.patch
>
>
> "Snapshot enabled" status has been added in HDFS-12455 by [~ajaykumar].
> However, it is found by [~jojochuang] that WebHdfsFileSystem.getFileStatus() 
> won't return the correct "snapshot enabled" status. The reason is that 
> JsonUtilClient.toFileStatus() did not check and append the "snapshot enabled" 
> flag to the resulting HdfsFileStatus object.
> Proof:
> In TestWebHDFS#testWebHdfsAllowandDisallowSnapshots(), add the following 
> lines indicated by prepending "+":
>  
> {code:java}
> // allow snapshots on /bar using webhdfs
> webHdfs.allowSnapshot(bar);
> +// check if snapshot status is enabled
> +assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
> +assertTrue(webHdfs.getFileStatus(bar).isSnapshotEnabled());{code}
>  
> The first assertion will pass, as expected, while the second assertion will 
> fail because of the reason above.
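> A hedged sketch of the missing step in JsonUtilClient.toFileStatus() (the
> JSON key and flag names are assumptions, not taken from an actual patch):
> {code:java}
> // Sketch fragment only: read the bit from the parsed JSON map and carry it
> // into the HdfsFileStatus flags; key/flag names are assumed, not verified.
> final Boolean snapshotEnabled = (Boolean) m.get("snapshotEnabled");
> EnumSet<HdfsFileStatus.Flags> flags =
>     EnumSet.noneOf(HdfsFileStatus.Flags.class);
> if (Boolean.TRUE.equals(snapshotEnabled)) {
>   flags.add(HdfsFileStatus.Flags.SNAPSHOT_ENABLED);
> }
> // ... then pass 'flags' into the HdfsFileStatus being constructed ...
> {code}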






[jira] [Created] (HDFS-13886) HttpFSFileSystem.getFileStatus() doesn't return "snapshot enabled" bit

2018-08-30 Thread Siyao Meng (JIRA)
Siyao Meng created HDFS-13886:
-

 Summary: HttpFSFileSystem.getFileStatus() doesn't return "snapshot 
enabled" bit
 Key: HDFS-13886
 URL: https://issues.apache.org/jira/browse/HDFS-13886
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: httpfs
Affects Versions: 3.0.3, 3.1.1
Reporter: Siyao Meng
Assignee: Siyao Meng


FSOperations.toJsonInner() doesn't check the "snapshot enabled" bit. 
Therefore, fs.getFileStatus(path).isSnapshotEnabled() will always return 
false for the fs types HttpFSFileSystem/WebHdfsFileSystem/SWebhdfsFileSystem. 
Additional tests will be added in BaseTestHttpFSWith to prevent this from 
happening.
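A hedged sketch of the kind of assertion such a test would add (following the
pattern of the HDFS-13838 snippet above; the actual plumbing in
BaseTestHttpFSWith will differ):

{code:java}
// Illustrative assertion only: the "snapshot enabled" bit must survive the
// JSON round trip through HttpFS, matching what DistributedFileSystem says.
Path bar = new Path("/bar");
dfs.mkdirs(bar);
dfs.allowSnapshot(bar);
assertTrue(dfs.getFileStatus(bar).isSnapshotEnabled());
assertTrue(httpFs.getFileStatus(bar).isSnapshotEnabled()); // fails today
{code}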






Re: Hadoop 3.2 Release Plan proposal

2018-08-30 Thread Virajith Jalaparti
Hi Sunil,

Quick correction on the task list  (missed this earlier) -- HDFS-12615 is
being done by Inigo Goiri

-Virajith




[jira] [Created] (HDFS-13887) Remove hadoop-ozone-filesystem dependency on hadoop-ozone-integration-test

2018-08-30 Thread Namit Maheshwari (JIRA)
Namit Maheshwari created HDFS-13887:
---

 Summary: Remove hadoop-ozone-filesystem dependency on 
hadoop-ozone-integration-test
 Key: HDFS-13887
 URL: https://issues.apache.org/jira/browse/HDFS-13887
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Namit Maheshwari


hadoop-ozone-filesystem has a dependency on hadoop-ozone-integration-test.

Ideally, filesystem modules should not depend on test modules.

This also causes issues when developing unit tests that try to instantiate an 
OzoneFileSystem object inside hadoop-ozone-integration-test, as that would 
create a circular dependency.






[jira] [Created] (HDFS-13888) RequestHedgingProxyProvider shows InterruptedException

2018-08-30 Thread JIRA
Íñigo Goiri created HDFS-13888:
--

 Summary: RequestHedgingProxyProvider shows InterruptedException
 Key: HDFS-13888
 URL: https://issues.apache.org/jira/browse/HDFS-13888
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Íñigo Goiri


RequestHedgingProxyProvider shows InterruptedException when running:
{code}
2018-08-30 23:52:48,883 WARN ipc.Client: interrupted waiting to send rpc request to server
java.lang.InterruptedException
        at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:404)
        at java.util.concurrent.FutureTask.get(FutureTask.java:191)
        at org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:1142)
        at org.apache.hadoop.ipc.Client.call(Client.java:1395)
        at org.apache.hadoop.ipc.Client.call(Client.java:1353)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:900)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.hdfs.server.namenode.ha.RequestHedgingProxyProvider$RequestHedgingInvocationHandler$1.call(RequestHedgingProxyProvider.java:135)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
{code}

It looks like this is the case of the background request being killed once 
the main one succeeds. We should not log the full stack trace for this; 
maybe just a debug log.
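A sketch of the proposed handling (structure and names assumed, not the
actual RequestHedgingProxyProvider or ipc.Client code):

{code:java}
// Sketch fragment only: when a losing hedged call is interrupted because
// another proxy already succeeded, log at DEBUG without the WARN and the
// full stack trace.
try {
  return callable.call(); // the hedged invocation against one proxy
} catch (InterruptedException e) {
  // Expected: the winning request cancels the outstanding ones.
  LOG.debug("Invocation to {} interrupted; another proxy already succeeded",
      proxyInfo, e);
  throw e;
}
{code}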


