[jira] [Created] (HADOOP-14468) S3Guard: make short-circuit getFileStatus() configurable

2017-05-30 Thread Aaron Fabbri (JIRA)
Aaron Fabbri created HADOOP-14468:
-

 Summary: S3Guard: make short-circuit getFileStatus() configurable
 Key: HADOOP-14468
 URL: https://issues.apache.org/jira/browse/HADOOP-14468
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Aaron Fabbri
Assignee: Aaron Fabbri


Currently, when S3Guard is enabled, getFileStatus() skips S3 entirely if it 
first gets a result from the MetadataStore (e.g. DynamoDB).

I would like to add a new parameter 
{{fs.s3a.metadatastore.getfilestatus.authoritative}} which, when true, keeps 
the current behavior. When false, S3AFileSystem will check both S3 and the 
MetadataStore.

I'm not sure yet whether we want this behavior to be the same for all callers 
of getFileStatus(), or whether we only want to check both S3 and the 
MetadataStore for some internal callers such as open().
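As a sketch of the proposed switch (only the config key name comes from this 
issue; the class, method shape, and stand-in lookups below are invented for 
illustration):

```java
import java.util.Optional;
import java.util.function.Function;

// Hypothetical sketch of the proposed behavior, not the S3A implementation.
public class GetFileStatusSketch {
  static final String AUTHORITATIVE_KEY =
      "fs.s3a.metadatastore.getfilestatus.authoritative";

  // 'store' and 's3' stand in for the MetadataStore lookup and the S3 HEAD.
  static Optional<String> getFileStatus(
      String path,
      boolean authoritative,
      Function<String, Optional<String>> store,
      Function<String, Optional<String>> s3) {
    Optional<String> cached = store.apply(path);
    if (authoritative && cached.isPresent()) {
      return cached;  // current behavior: a MetadataStore hit skips S3
    }
    // proposed non-authoritative mode: consult S3 as well
    Optional<String> live = s3.apply(path);
    return live.isPresent() ? live : cached;
  }
}
```

In authoritative mode a store hit never touches S3; in non-authoritative mode 
the S3 result wins when both are present.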



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14467) S3Guard: Improve FNFE message when opening a stream

2017-05-30 Thread Aaron Fabbri (JIRA)
Aaron Fabbri created HADOOP-14467:
-

 Summary: S3Guard: Improve FNFE message when opening a stream
 Key: HADOOP-14467
 URL: https://issues.apache.org/jira/browse/HADOOP-14467
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Aaron Fabbri
Assignee: Aaron Fabbri


Following up on the [discussion on 
HADOOP-13345|https://issues.apache.org/jira/browse/HADOOP-13345?focusedCommentId=16030050&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16030050]: 
because S3Guard can serve getFileStatus() from the MetadataStore without doing 
a HEAD on S3, a FileNotFoundException caused by S3 GET inconsistency does not 
happen on open(), but on the first read of the stream. We may add retries to 
the S3 client in the future, but for now we should have an exception message 
that indicates this may be due to inconsistency (assuming it isn't a more 
straightforward case, like someone deleting the object out from under you).

This is expected to be a rare case, since the S3 service is now mostly 
consistent for GET.
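One possible shape for such a message (illustrative only; the helper name and 
wording below are invented, not the committed fix):

```java
import java.io.FileNotFoundException;

// Hypothetical sketch of an FNFE message that hints at S3 inconsistency.
public class InconsistencyMessage {
  static FileNotFoundException readFailed(String key) {
    return new FileNotFoundException(key
        + ": read failed on a stream whose open() was served from the"
        + " S3Guard MetadataStore; the object may be unreadable due to S3"
        + " eventual consistency, or it may have been deleted concurrently.");
  }
}
```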






Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-05-30 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/330/

[May 30, 2017 5:48:58 AM] (aajisaka) MAPREDUCE-6887. Modifier 'static' is 
redundant for inner enums.
[May 30, 2017 6:11:10 AM] (aajisaka) HADOOP-14458. Add missing imports to
[May 30, 2017 8:22:40 AM] (sunilg) YARN-6635. Refactor yarn-app pages in new 
YARN UI. Contributed by Akhil




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.sftp.TestSFTPFileSystem 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 
   hadoop.hdfs.server.namenode.TestNamenodeCapacityReport 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
   org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 

   mvninstall:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/330/artifact/out/patch-mvninstall-root.txt
  [496K]

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/330/artifact/out/patch-compile-root.txt
  [20K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/330/artifact/out/patch-compile-root.txt
  [20K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/330/artifact/out/patch-compile-root.txt
  [20K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/330/artifact/out/patch-unit-hadoop-assemblies.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/330/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [144K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/330/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [1.1M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/330/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/330/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/330/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [76K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/330/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]

[jira] [Reopened] (HADOOP-12825) Log slow name resolutions

2017-05-30 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung reopened HADOOP-12825:


> Log slow name resolutions 
> --
>
> Key: HADOOP-12825
> URL: https://issues.apache.org/jira/browse/HADOOP-12825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.8.0, 2.7.2, 3.0.0-alpha1
>
> Attachments: getByName-call-graph.txt, HADOOP-12825.001.patch, 
> HADOOP-12825.002.patch
>
>
> Logging slow name resolutions would be useful in identifying DNS performance 
> issues in a cluster. Most resolutions go through 
> {{org.apache.hadoop.security.SecurityUtil.getByName}} (see attached call 
> graph). Adding additional logging to this method would expose such issues.
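The idea can be sketched as timing each lookup and warning when it exceeds a 
threshold. SecurityUtil.getByName is the real hook point in Hadoop; the 
wrapper class and threshold constant below are illustrative only:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Illustrative sketch of slow-resolution logging, not the committed patch.
public class SlowLookupLogger {
  static final long SLOW_LOOKUP_THRESHOLD_MS = 1000;

  static InetAddress getByName(String host) throws UnknownHostException {
    long start = System.nanoTime();
    try {
      return InetAddress.getByName(host);
    } finally {
      long elapsedMs = (System.nanoTime() - start) / 1_000_000;
      if (elapsedMs > SLOW_LOOKUP_THRESHOLD_MS) {
        // Hadoop would use its SLF4J logger here instead of stderr.
        System.err.println("Slow name resolution for " + host + ": "
            + elapsedMs + " ms");
      }
    }
  }
}
```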






Removal of maven eclipse plug-in support from Apache Yetus

2017-05-30 Thread Allen Wittenauer

This is just a heads up.

The Apache Yetus community is debating removing the maven eclipse 
plug-in testing support from precommit. (Given that Apache Hadoop is currently 
rigged up to always run Yetus' master for testing purposes, this means Hadoop 
will see the removal immediately post-commit.) The plug-in itself is deprecated 
and always throws warnings/errors during execution.  Additionally, Eclipse has 
added import support as part of Neon.  

If you feel strongly either way, feel free to hop onto YETUS-509.

Thanks.



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-05-30 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/419/

[May 29, 2017 8:30:23 AM] (aajisaka) HDFS-11832. Switch leftover logs to slf4j 
format in BlockManager.java.
[May 30, 2017 5:48:58 AM] (aajisaka) MAPREDUCE-6887. Modifier 'static' is 
redundant for inner enums.
[May 30, 2017 6:11:10 AM] (aajisaka) HADOOP-14458. Add missing imports to




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method. At MiniKdc.java:[line 368] 
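Null-dereference warnings like this one typically come from 
java.io.File.listFiles(), which returns null (not an empty array) on I/O 
error or when the path is not a directory. The guard FindBugs wants looks 
like this (illustrative code, not the Hadoop source):

```java
import java.io.File;

// Illustrative: check listFiles() for null before dereferencing its result.
public class ListFilesGuard {
  static int countEntries(File dir) {
    File[] entries = dir.listFiles();
    if (entries == null) {
      // I/O error, permission problem, or not a directory
      return 0;
    }
    return entries.length;
  }
}
```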

FindBugs :

   module:hadoop-common-project/hadoop-auth 
   org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest, 
HttpServletResponse) makes inefficient use of keySet iterator instead of 
entrySet iterator. At MultiSchemeAuthenticationHandler.java:[line 192] 
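The keySet-iterator warning refers to a common pattern: iterating keySet() 
and calling get() per key performs a second lookup that entrySet() avoids. A 
minimal illustration (not the Hadoop code):

```java
import java.util.Map;

// Illustrative contrast of the flagged pattern and the preferred one.
public class EntrySetIteration {
  static String joinViaKeySet(Map<String, String> m) {
    StringBuilder sb = new StringBuilder();
    for (String k : m.keySet()) {        // flagged: extra get() per key
      sb.append(k).append('=').append(m.get(k)).append(';');
    }
    return sb.toString();
  }

  static String joinViaEntrySet(Map<String, String> m) {
    StringBuilder sb = new StringBuilder();
    for (Map.Entry<String, String> e : m.entrySet()) {  // single pass
      sb.append(e.getKey()).append('=').append(e.getValue()).append(';');
    }
    return sb.toString();
  }
}
```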

FindBugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) 
unconditionally sets the field unknownValue. At CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) 
unconditionally sets the field unknownValue. At 
CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of 
called method. At FileUtil.java:[line 118] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, 
File, Path, File) due to return value of called method. At 
RawLocalFileSystem.java:[line 387] 
   Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) 
ignored, but method has no side effect. At FTPFileSystem.java:[line 421] 
   Useless condition: lazyPersist == true at this point. At 
CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) 
incorrectly handles double value. At DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles double value. At 
DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly 
handles float value. At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles float value. At 
FloatWritable.java:[line 89] 
   Possible null pointer dereference in 
org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to 
return value of called method. At IOUtils.java:[line 350] 
   org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient 
use of keySet iterator instead of entrySet iterator. At ECSchema.java:[line 
193] 
   Possible bad parsing of shift operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 
398] 
   org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory) 
unconditionally sets the field mmfImpl. At DefaultMetricsFactory.java:[line 
49] 
   org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean) 
unconditionally sets the field miniClusterMode. At 
DefaultMetricsSystem.java:[line 100] 
   Useless object stored in variable seqOs of method 
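The DoubleWritable/FloatWritable findings above flag hand-rolled 
floating-point comparisons, which mishandle NaN and -0.0, whereas 
Double.compare/Float.compare give a consistent total order. An illustrative 
contrast (not the Hadoop code):

```java
// Illustrative: why FindBugs flags operator-based double comparisons.
public class FloatingCompare {
  static int operatorCompare(double a, double b) {
    return a < b ? -1 : (a == b ? 0 : 1);  // NaN always falls through to 1
  }

  static int totalOrderCompare(double a, double b) {
    return Double.compare(a, b);           // NaN equals NaN, -0.0 < 0.0
  }
}
```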

[jira] [Created] (HADOOP-14466) Remove useless document from TestAliyunOSSFileSystemContract.java

2017-05-30 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-14466:
--

 Summary: Remove useless document from 
TestAliyunOSSFileSystemContract.java
 Key: HADOOP-14466
 URL: https://issues.apache.org/jira/browse/HADOOP-14466
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Akira Ajisaka
Priority: Minor









Re: About 2.7.4 Release

2017-05-30 Thread Akira Ajisaka

Sure.
If you want to edit the wiki, please tell me your ASF confluence account.

-Akira

On 2017/05/30 15:31, Rohith Sharma K S wrote:

A couple more JIRAs need to be backported for the 2.7.4 release. These will
solve RM HA instability issues:
https://issues.apache.org/jira/browse/YARN-5333
https://issues.apache.org/jira/browse/YARN-5988
https://issues.apache.org/jira/browse/YARN-6304

I will raise JIRAs to backport them.

@Akira, could you help to add these JIRAs to the wiki?

Thanks & Regards
Rohith Sharma K S

On 29 May 2017 at 12:19, Akira Ajisaka  wrote:


Created a page for 2.7.4 release.
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.7.4

If you want to edit this wiki, please ping me.

Regards,
Akira


On 2017/05/23 4:42, Brahma Reddy Battula wrote:


Hi Konstantin Shvachko


How about creating a wiki page for the 2.7.4 release status, like the ones
for 2.8 and trunk, at the following link?


https://cwiki.apache.org/confluence/display/HADOOP



From: Konstantin Shvachko 
Sent: Saturday, May 13, 2017 3:58 AM
To: Akira Ajisaka
Cc: Hadoop Common; Hdfs-dev; mapreduce-...@hadoop.apache.org;
yarn-...@hadoop.apache.org
Subject: Re: About 2.7.4 Release

Latest update on the links and filters. Here is the correct link for the
filter:
https://issues.apache.org/jira/secure/IssueNavigator.jspa?requestId=12340814

Also updated: https://s.apache.org/Dzg4

Had to do some Jira debugging. Sorry for the confusion.

Thanks,
--Konstantin

On Wed, May 10, 2017 at 2:30 PM, Konstantin Shvachko <
shv.had...@gmail.com>
wrote:

Hey Akira,


I didn't have private filters. Most probably Jira caches something.
Your filter is in the right direction, but for some reason it lists only
22 issues, while mine has 29.
It misses e.g. YARN-5543.

Anyways, I created a Jira filter "Hadoop 2.7.4 release blockers", shared it 
with "everybody", and updated my link to point to that filter. So you can use 
any of the three methods below to get the correct list:
1. Go to https://s.apache.org/Dzg4
2. Go to the filter via
https://issues.apache.org/jira/issues?filter=12340814
   or by finding "Hadoop 2.7.4 release blockers" filter in the jira
3. On Advanced issues search page paste this:
project in (HDFS, HADOOP, YARN, MAPREDUCE) AND labels = release-blocker
AND "Target Version/s" = 2.7.4

Hope this solves the confusion for which issues are included.
Please LMK if it doesn't, as it is important.

Thanks,
--Konstantin

On Tue, May 9, 2017 at 9:58 AM, Akira Ajisaka 
wrote:

Hi Konstantin,


Thank you for volunteering as release manager!

Actually the original link works fine: https://s.apache.org/Dzg4



I couldn't see the link. Maybe it is a private filter?

Here is a link I generated: https://s.apache.org/ehKy
This filter includes resolved issues and excludes fixVersion == 2.7.4

Thanks and Regards,
Akira

On 2017/05/08 19:20, Konstantin Shvachko wrote:

Hi Brahma Reddy Battula,


Actually the original link works fine: https://s.apache.org/Dzg4
Your link excludes closed and resolved issues, which need backporting, and
which we cannot reopen, as discussed in this thread earlier.

Looked through the issues you proposed:

HDFS-9311
Seems like a new feature. It helps failover to a standby node when the
primary is under heavy load, but it introduces new APIs, addresses, and
config parameters, and it needs at least one follow-up jira.
Looks like a backward-compatible change, though.
Did you have a chance to run it in production?

+1 on
HDFS-10987 (Make Decommission less expensive when lot of ...)

HDFS-9902 (Support different values of dfs.datanode.du ...)

HDFS-8312 (Trash does not descent into child directories to check for ...)



HADOOP-14100 (Upgrade Jsch jar to latest version to fix vulnerability in ...)

Re: Reminder to always set x.0.0 and x.y.0 fix versions when backporting

2017-05-30 Thread Akira Ajisaka

Thanks Andrew for the reminder!

I've checked the commit log and set fix version 2.9.0 on each issue whose 
patch is committed to branch-2.


-Akira

On 2017/05/08 6:21, Andrew Wang wrote:

Hi folks,

I've noticed with the backporting efforts for 2.8.1, we're losing some
x.y.0 fix versions (e.g. 2.9.0). Our fix version scheme is described here
(and also quoted below):

https://hadoop.apache.org/versioning.html

   1. For each *minor* release line, set the *lowest unreleased a.b.c
   version, where c ≥ 0*.
   2. For each *major* release line, set the *lowest unreleased a.b.0
   version*.

This JIRA query for instance turns up 44 JIRAs with fix versions 2.8.1 or
2.8.2 and not 2.9.0:

https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2C%20YARN%2C%20MAPREDUCE)%20and%20fixVersion%20in%20(%222.8.1%22%2C%20%222.8.2%22)%20and%20fixVersion%20!%3D%20%222.9.0%22

Best,
Andrew






Re: About 2.7.4 Release

2017-05-30 Thread Rohith Sharma K S
A couple more JIRAs need to be backported for the 2.7.4 release. These will
solve RM HA instability issues:
https://issues.apache.org/jira/browse/YARN-5333
https://issues.apache.org/jira/browse/YARN-5988
https://issues.apache.org/jira/browse/YARN-6304

I will raise JIRAs to backport them.

@Akira, could you help to add these JIRAs to the wiki?

Thanks & Regards
Rohith Sharma K S

On 29 May 2017 at 12:19, Akira Ajisaka  wrote:

> Created a page for 2.7.4 release.
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.7.4
>
> If you want to edit this wiki, please ping me.
>
> Regards,
> Akira
>
>
> On 2017/05/23 4:42, Brahma Reddy Battula wrote:
>
>> Hi Konstantin Shvachko
>>
>>
>> How about creating a wiki page for the 2.7.4 release status, like the ones
>> for 2.8 and trunk, at the following link?
>>
>>
>> https://cwiki.apache.org/confluence/display/HADOOP
>>
>>
>> 
>> From: Konstantin Shvachko 
>> Sent: Saturday, May 13, 2017 3:58 AM
>> To: Akira Ajisaka
>> Cc: Hadoop Common; Hdfs-dev; mapreduce-...@hadoop.apache.org;
>> yarn-...@hadoop.apache.org
>> Subject: Re: About 2.7.4 Release
>>
>> Latest update on the links and filters. Here is the correct link for the
>> filter:
>> https://issues.apache.org/jira/secure/IssueNavigator.jspa?requestId=12340814
>>
>> Also updated: https://s.apache.org/Dzg4
>>
>> Had to do some Jira debugging. Sorry for confusion.
>>
>> Thanks,
>> --Konstantin
>>
>> On Wed, May 10, 2017 at 2:30 PM, Konstantin Shvachko <
>> shv.had...@gmail.com>
>> wrote:
>>
>> Hey Akira,
>>>
>>> I didn't have private filters. Most probably Jira caches something.
>>> Your filter is in the right direction, but for some reason it lists only
>>> 22 issues, while mine has 29.
>>> It misses e.g. YARN-5543.
>>>
>>> Anyways, I created a Jira filter now "Hadoop 2.7.4 release blockers",
>>> shared it with "everybody", and updated my link to point to that filter.
>>> So
>>> you can use any of the three methods below to get the correct list:
>>> 1. Go to https://s.apache.org/Dzg4
>>> 2. Go to the filter via
>>> https://issues.apache.org/jira/issues?filter=12340814
>>>or by finding "Hadoop 2.7.4 release blockers" filter in the jira
>>> 3. On Advanced issues search page paste this:
>>> project in (HDFS, HADOOP, YARN, MAPREDUCE) AND labels = release-blocker
>>> AND "Target Version/s" = 2.7.4
>>>
>>> Hope this solves the confusion for which issues are included.
>>> Please LMK if it doesn't, as it is important.
>>>
>>> Thanks,
>>> --Konstantin
>>>
>>> On Tue, May 9, 2017 at 9:58 AM, Akira Ajisaka 
>>> wrote:
>>>
>>>> Hi Konstantin,
>>>>
>>>> Thank you for volunteering as release manager!
>>>>
>>>>> Actually the original link works fine: https://s.apache.org/Dzg4
>>>>
>>>> I couldn't see the link. Maybe it is a private filter?
>>>>
>>>> Here is a link I generated: https://s.apache.org/ehKy
>>>> This filter includes resolved issues and excludes fixVersion == 2.7.4
>>>>
>>>> Thanks and Regards,
>>>> Akira
>>>>
>>>> On 2017/05/08 19:20, Konstantin Shvachko wrote:
>>>>
>>>>> Hi Brahma Reddy Battula,
> Actually the original link works fine: https://s.apache.org/Dzg4
> Your link excludes closed and resolved issues, which needs backporting,
> and
> which we cannot reopen, as discussed in this thread earlier.
>
> Looked through the issues you proposed:
>
> HDFS-9311
> Seems like a new feature. It helps failover to a standby node when the
> primary is under heavy load, but it introduces new APIs, addresses, and
> config parameters, and it needs at least one follow-up jira.
> Looks like a backward-compatible change, though.
> Did you have a chance to run it in production?
>
> +1 on
> HDFS-10987 (Make Decommission less expensive when lot of ...)
>
> HDFS-9902 (Support different values of dfs.datanode.du ...)
>
> HDFS-8312 (Trash does not descent into child directories to check for ...)