[jira] [Commented] (HDFS-12693) Ozone: Enable XFrame options for KSM/SCM web ui

2017-10-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214125#comment-16214125
 ] 

Hadoop QA commented on HDFS-12693:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
19s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| Timed out junit tests | 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles |
|   | 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12693 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12893430/HDFS-12693-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9ff1c2750d28 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | 

[jira] [Comment Edited] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets

2017-10-21 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214120#comment-16214120
 ] 

Konstantin Shvachko edited comment on HDFS-12638 at 10/21/17 10:49 PM:
---

It looks to me that the main problem here is that {{unprotectedDelete()}} while 
collecting blocks in {{INodeFile.destroyAndCollectBlocks()}} invalidates block 
collection {{bcid}}, but leaves the block in the {{BlocksMap}}. Then 
{{FSNamesystem.delete()}} releases the lock and reacquires it again for actual 
block removal in {{FSNamesystem.removeBlocks()}}. So if {{ReplicationMonitor}} 
or {{NamenodeFsck}} kicks in after the lock is released, but before the blocks 
are deleted from {{BlocksMap}}, it can hit an NPE accessing an invalid (id = -1) 
INode.
Incremental block deletion was introduced in HDFS-6618, so all major versions 
should be affected.

To fix this we should not invalidate {{bcid}} in 
{{INodeFile.destroyAndCollectBlocks()}}, but rather in 
{{BlockManager.removeBlockFromMap()}}, when the block is actually removed from 
{{BlocksMap}}.
I agree with [~daryn] we should fix the bug (invalid blocks in the map), rather 
than mitigate its consequences (NPE).


was (Author: shv):
It looks to me that the main problem here is that {{unprotectedDelete()}} while 
collecting blocks in {{INodeFile.destroyAndCollectBlocks()}} invalidates block 
collection {{bcid}}, but leaves the block in the {{BlocksMap}}. Then 
{{FSNamesystem.delete()}} releases the lock and reacquires it again for actual 
block removal in {{FSNamesystem.removeBlocks()}}. So if {{ReplicationMonitor}} 
or {{NamenodeFsck}} kicks in after the lock is released, but before the blocks 
are deleted from {{BlocksMap}}, it can hit an NPE accessing an invalid (id = -1) 
INode.
Incremental block deletion was introduced in HDFS-6618, so all major versions 
should be affected.

To fix this we should not invalidate {{bcid}} in 
{{INodeFile.destroyAndCollectBlocks()}}, but rather in 
{{BlockManager.removeBlockFromMap()}}, when the block is actually removed from 
{{BlocksMap}}.

> NameNode exits due to ReplicationMonitor thread received Runtime exception in 
> ReplicationWork#chooseTargets
> ---
>
> Key: HDFS-12638
> URL: https://issues.apache.org/jira/browse/HDFS-12638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
> Attachments: HDFS-12638-branch-2.8.2.001.patch
>
>
> The active NameNode exited due to an NPE. I can confirm that the 
> BlockCollection passed in when creating ReplicationWork is null, but I do not 
> know why it is null. Looking through the history, I found that 
> [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check 
> for whether BlockCollection is null.
> The NN logs are as follows:
> {code:java}
> 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744)
> at java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets

2017-10-21 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214120#comment-16214120
 ] 

Konstantin Shvachko commented on HDFS-12638:


It looks to me that the main problem here is that {{unprotectedDelete()}} while 
collecting blocks in {{INodeFile.destroyAndCollectBlocks()}} invalidates block 
collection {{bcid}}, but leaves the block in the {{BlocksMap}}. Then 
{{FSNamesystem.delete()}} releases the lock and reacquires it again for actual 
block removal in {{FSNamesystem.removeBlocks()}}. So if {{ReplicationMonitor}} 
or {{NamenodeFsck}} kicks in after the lock is released, but before the blocks 
are deleted from {{BlocksMap}}, it can hit an NPE accessing an invalid (id = -1) 
INode.
Incremental block deletion was introduced in HDFS-6618, so all major versions 
should be affected.

To fix this we should not invalidate {{bcid}} in 
{{INodeFile.destroyAndCollectBlocks()}}, but rather in 
{{BlockManager.removeBlockFromMap()}}, when the block is actually removed from 
{{BlocksMap}}.
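The window described above can be sketched as a tiny standalone model. The class and field names below merely echo the comment ({{BlocksMap}}, {{bcid}}); this is illustrative Java, not the HDFS implementation, and the "fixed" method mirrors the proposed move of the invalidation into the removal step:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model of the race: with the buggy ordering, the block's
// collection id (bcid) is set to -1 while the block is still reachable in
// the map, so a concurrent reader (ReplicationMonitor/fsck in the comment)
// can observe an invalid id. The fixed ordering invalidates the id only at
// the moment the block actually leaves the map.
class BlocksMapRace {
    static final long INVALID_ID = -1;

    static class BlockInfo {
        long bcid;                           // -1 means "no block collection"
        BlockInfo(long bcid) { this.bcid = bcid; }
    }

    final Map<Long, BlockInfo> blocksMap = new HashMap<>();

    // Buggy order: invalidate first (under one lock hold)...
    void destroyAndCollectBlocks(long blockId) {
        BlockInfo b = blocksMap.get(blockId);
        if (b != null) b.bcid = INVALID_ID;  // block is STILL in the map here
    }

    // ...remove later (after the lock was released and reacquired).
    void removeBlocks(long blockId) {
        blocksMap.remove(blockId);
    }

    // Proposed order: invalidate only when the block leaves the map.
    void removeBlockFromMap(long blockId) {
        BlockInfo b = blocksMap.remove(blockId);
        if (b != null) b.bcid = INVALID_ID;  // safe: no longer reachable
    }

    // What a concurrent reader would see for the block, or null if absent.
    Long observedBcid(long blockId) {
        BlockInfo b = blocksMap.get(blockId);
        return b == null ? null : b.bcid;
    }
}
```

With the buggy ordering, a reader that runs between the two steps sees bcid == -1 (the NPE-prone window); with the proposed ordering it sees either a valid id or nothing, never an invalid entry.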

> NameNode exits due to ReplicationMonitor thread received Runtime exception in 
> ReplicationWork#chooseTargets
> ---
>
> Key: HDFS-12638
> URL: https://issues.apache.org/jira/browse/HDFS-12638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
> Attachments: HDFS-12638-branch-2.8.2.001.patch
>
>
> The active NameNode exited due to an NPE. I can confirm that the 
> BlockCollection passed in when creating ReplicationWork is null, but I do not 
> know why it is null. Looking through the history, I found that 
> [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check 
> for whether BlockCollection is null.
> The NN logs are as follows:
> {code:java}
> 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744)
> at java.lang.Thread.run(Thread.java:834)
> {code}






[jira] [Commented] (HDFS-12665) [AliasMap] Create a version of the AliasMap that runs in memory in the Namenode (leveldb)

2017-10-21 Thread Ewan Higgs (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214114#comment-16214114
 ] 

Ewan Higgs commented on HDFS-12665:
---

[~virajith], thanks for the quick review.

{quote}Can you please add javadocs for all the new classes added?{quote}
Sure.

{quote}Is there a reason to refactor FileRegion and introduce 
ProvidedStorageLocation? Also, I think the name ProvidedStorageLocation is 
confusing given there is also a StorageLocation, which is something very 
different. Maybe rename to ProvidedLocation.{quote}
The AliasMap is a mapping between a block and a location in an external storage 
system so we need to break the FileRegion in two: the key and the value. The 
Block is the key in this mapping and {{FileRegion}} is a good name for the 
value being stored but it's taken by the entire KV entry itself.

{quote}The new AliasMap class has a confusing name. It is supposed to be an 
implementation of the AliasMapProtocol but the name is a prefix of the 
latter.{quote}
{{AliasMapProtocol}} is an interface (hence the {{Protocol}} suffix), so the 
concrete version doesn't carry it. This should be clearer once javadoc comments 
are attached to it.

{quote}Renaming LevelDBAliasMapClient to something along the lines of 
InMemoryLevelDBAliasMap will make it a more descriptive name for the class. In 
general, adding a similar prefix to AliasMapProtocol, LevelDBAliasMapServer 
will improve the readability of the code.{quote}Agree. This will also help 
differentiate it from the {{LevelDBFileRegionFormat}} (HDFS-12591).

{quote}Can we move LevelDBAliasMapClient to the 
org.apache.hadoop.hdfs.server.common.BlockAliasMapImpl package?{quote}
Sure. This means all the classes used by fs2img will be in the same package 
(unless they need dependencies like using DynamoDB, AzureTable, etc).

{quote}ITAliasMap only contains unit tests. I believe the convention is to 
start the name of the class with Test.{quote}I think this was part of how the 
code evolved: e.g., code using MiniDFSCluster lived here but moved back to the 
unit tests, since the HDFS project has a different sense of what differentiates 
unit and integration tests.

{quote}Why was the block pool id removed from FileRegion? It was used as a 
check in the DN so that only blocks belonging to the correct block pool id were 
reported to the NN.{quote}In an early version we refactored it to use 
{{ExtendedBlock}} as the key but were advised that it should remain {{Block}}. 
AIUI, the AliasMap is unique to a NN so there is no ambiguity.

{quote}Why rename getVolumeMap to fetchVolumeMap in 
ProvidedBlockPoolSlice?{quote} The return type is {{void}}, so naming this 
{{getVolumeMap}} is misleading.
{quote}In startAliasMapServerIfNecessary, I think the aliasmap should be 
started only if provided is configured. i.e., check if 
DFSConfigKeys.DFS_NAMENODE_PROVIDED_ENABLED is set to true.{quote}That makes 
sense. If an administrator is running with 
{{DFSConfigKeys.DFS_NAMENODE_PROVIDED_ENABLED}} set to false but 
{{DFSConfigKeys.DFS_USE_ALIASMAP}} set to true, that's a misconfiguration. 
Should we throw, or just log a warning?

{quote}Some of the changes have led to lines crossing the 80 character limit. 
Can you please fix them?{quote}Sure. It seems to be the convention to ignore 
that in {{DFSConfigKeys.java}}, but I'll take a look at fixing this up 
elsewhere.

> [AliasMap] Create a version of the AliasMap that runs in memory in the 
> Namenode (leveldb)
> -
>
> Key: HDFS-12665
> URL: https://issues.apache.org/jira/browse/HDFS-12665
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
> Attachments: HDFS-12665-HDFS-9806.001.patch, 
> HDFS-12665-HDFS-9806.002.patch
>
>
> The design of Provided Storage requires the use of an AliasMap to manage the 
> mapping between blocks of files on the local HDFS and ranges of files on a 
> remote storage system. To reduce load from the Namenode, this can be done 
> using a pluggable external service (e.g. AzureTable, Cassandra, Ratis). 
> However, to aide adoption and ease of deployment, we propose an in memory 
> version.
> This AliasMap will be a wrapper around LevelDB (already a dependency from the 
> Timeline Service) and use protobuf for the key (blockpool, blockid, and 
> genstamp) and the value (url, offset, length, nonce). The in memory service 
> will also have a configurable port on which it will listen for updates from 
> Storage Policy Satisfier (SPS) Coordinating Datanodes (C-DN).
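The description above pins down the mapping's key (blockpool, blockid, genstamp) and value (url, offset, length, nonce). As a sketch under those assumptions, an in-memory stand-in looks like the following; the class names and the plain HashMap (in place of the LevelDB/protobuf wrapper) are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Illustrative in-memory model of the AliasMap: key fields (blockPool,
// blockId, genStamp) and value fields (url, offset, length, nonce) come from
// the issue description; everything else is a sketch, not the real service.
class AliasMapSketch {
    static final class BlockKey {
        final String blockPool; final long blockId; final long genStamp;
        BlockKey(String bp, long id, long gs) {
            blockPool = bp; blockId = id; genStamp = gs;
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof BlockKey)) return false;
            BlockKey k = (BlockKey) o;
            return blockId == k.blockId && genStamp == k.genStamp
                && blockPool.equals(k.blockPool);
        }
        @Override public int hashCode() {
            return Objects.hash(blockPool, blockId, genStamp);
        }
    }

    static final class ProvidedLocation {
        final String url; final long offset; final long length;
        final byte[] nonce;
        ProvidedLocation(String url, long offset, long length, byte[] nonce) {
            this.url = url; this.offset = offset;
            this.length = length; this.nonce = nonce;
        }
    }

    private final Map<BlockKey, ProvidedLocation> map = new HashMap<>();

    void write(BlockKey k, ProvidedLocation v) { map.put(k, v); }
    ProvidedLocation read(BlockKey k) { return map.get(k); }
}
```

In the real feature the map would be backed by LevelDB with protobuf-encoded keys and values, and updated over the configurable port mentioned above; the sketch only shows the key/value split discussed in the review.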




[jira] [Updated] (HDFS-12693) Ozone: Enable XFrame options for KSM/SCM web ui

2017-10-21 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12693:

Attachment: HDFS-12693-HDFS-7240.001.patch

> Ozone: Enable XFrame options for KSM/SCM web ui
> ---
>
> Key: HDFS-12693
> URL: https://issues.apache.org/jira/browse/HDFS-12693
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12693-HDFS-7240.001.patch
>
>
> Following the discussion about the security checklist on the dev list, I 
> started to check the security features of the existing HttpServer2 and found 
> that the XFrame options header is disabled by default. This patch enables it 
> by default for the SCM/KSM servers, similar to the Namenode/Datanode web UI. 
> (Note: even though the only form on the SCM/KSM UIs is the standard LogLevel 
> form, I think it's good practice to enable it by default.)
> Test:
> Without the patch (clean build, SCM ui):
> {code}
>  curl -v localhost:9876/jmx -o /dev/null  
>   
>* TCP_NODELAY set
> * Connected to localhost (::1) port 9876 (#0)
> > GET /jmx HTTP/1.1
> > Host: localhost:9876
> > User-Agent: curl/7.55.1
> > Accept: */*
> > 
> < HTTP/1.1 200 OK
> < Date: Sat, 21 Oct 2017 19:54:43 GMT
> < Cache-Control: no-cache
> < Expires: Sat, 21 Oct 2017 19:54:43 GMT
> < Date: Sat, 21 Oct 2017 19:54:43 GMT
> < Pragma: no-cache
> < Content-Type: application/json; charset=utf8
> < Access-Control-Allow-Methods: GET
> < Access-Control-Allow-Origin: *
> < Transfer-Encoding: chunked
> {code}
> With the patch:
> {code}
> curl -v localhost:9876/jmx -o /dev/null   
>   
> * Connected to localhost (::1) port 9876 (#0)
> > GET /jmx HTTP/1.1
> > Host: localhost:9876
> > User-Agent: curl/7.55.1
> > Accept: */*
> > 
> < HTTP/1.1 200 OK
> < Date: Sat, 21 Oct 2017 19:55:07 GMT
> < Cache-Control: no-cache
> < Expires: Sat, 21 Oct 2017 19:55:07 GMT
> < Date: Sat, 21 Oct 2017 19:55:07 GMT
> < Pragma: no-cache
> < Content-Type: application/json; charset=utf8
> < X-FRAME-OPTIONS: SAMEORIGIN
> < Access-Control-Allow-Methods: GET
> < Access-Control-Allow-Origin: *
> < Transfer-Encoding: chunked
> {code}
> Note: the X-FRAME-OPTIONS header is present only in the second case.






[jira] [Updated] (HDFS-12693) Ozone: Enable XFrame options for KSM/SCM web ui

2017-10-21 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12693:

Status: Patch Available  (was: Open)

> Ozone: Enable XFrame options for KSM/SCM web ui
> ---
>
> Key: HDFS-12693
> URL: https://issues.apache.org/jira/browse/HDFS-12693
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Elek, Marton
>Assignee: Elek, Marton
> Attachments: HDFS-12693-HDFS-7240.001.patch
>
>
> Following the discussion about the security checklist on the dev list, I 
> started to check the security features of the existing HttpServer2 and found 
> that the XFrame options header is disabled by default. This patch enables it 
> by default for the SCM/KSM servers, similar to the Namenode/Datanode web UI. 
> (Note: even though the only form on the SCM/KSM UIs is the standard LogLevel 
> form, I think it's good practice to enable it by default.)
> Test:
> Without the patch (clean build, SCM ui):
> {code}
>  curl -v localhost:9876/jmx -o /dev/null  
>   
>* TCP_NODELAY set
> * Connected to localhost (::1) port 9876 (#0)
> > GET /jmx HTTP/1.1
> > Host: localhost:9876
> > User-Agent: curl/7.55.1
> > Accept: */*
> > 
> < HTTP/1.1 200 OK
> < Date: Sat, 21 Oct 2017 19:54:43 GMT
> < Cache-Control: no-cache
> < Expires: Sat, 21 Oct 2017 19:54:43 GMT
> < Date: Sat, 21 Oct 2017 19:54:43 GMT
> < Pragma: no-cache
> < Content-Type: application/json; charset=utf8
> < Access-Control-Allow-Methods: GET
> < Access-Control-Allow-Origin: *
> < Transfer-Encoding: chunked
> {code}
> With the patch:
> {code}
> curl -v localhost:9876/jmx -o /dev/null   
>   
> * Connected to localhost (::1) port 9876 (#0)
> > GET /jmx HTTP/1.1
> > Host: localhost:9876
> > User-Agent: curl/7.55.1
> > Accept: */*
> > 
> < HTTP/1.1 200 OK
> < Date: Sat, 21 Oct 2017 19:55:07 GMT
> < Cache-Control: no-cache
> < Expires: Sat, 21 Oct 2017 19:55:07 GMT
> < Date: Sat, 21 Oct 2017 19:55:07 GMT
> < Pragma: no-cache
> < Content-Type: application/json; charset=utf8
> < X-FRAME-OPTIONS: SAMEORIGIN
> < Access-Control-Allow-Methods: GET
> < Access-Control-Allow-Origin: *
> < Transfer-Encoding: chunked
> {code}
> Note: the X-FRAME-OPTIONS header is present only in the second case.






[jira] [Created] (HDFS-12693) Ozone: Enable XFrame options for KSM/SCM web ui

2017-10-21 Thread Elek, Marton (JIRA)
Elek, Marton created HDFS-12693:
---

 Summary: Ozone: Enable XFrame options for KSM/SCM web ui
 Key: HDFS-12693
 URL: https://issues.apache.org/jira/browse/HDFS-12693
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Elek, Marton
Assignee: Elek, Marton


Following the discussion about the security checklist on the dev list, I 
started to check the security features of the existing HttpServer2 and found 
that the XFrame options header is disabled by default. This patch enables it by 
default for the SCM/KSM servers, similar to the Namenode/Datanode web UI.

(Note: even though the only form on the SCM/KSM UIs is the standard LogLevel 
form, I think it's good practice to enable it by default.)

Test:

Without the patch (clean build, SCM ui):

{code}
 curl -v localhost:9876/jmx -o /dev/null

   * TCP_NODELAY set
* Connected to localhost (::1) port 9876 (#0)
> GET /jmx HTTP/1.1
> Host: localhost:9876
> User-Agent: curl/7.55.1
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Sat, 21 Oct 2017 19:54:43 GMT
< Cache-Control: no-cache
< Expires: Sat, 21 Oct 2017 19:54:43 GMT
< Date: Sat, 21 Oct 2017 19:54:43 GMT
< Pragma: no-cache
< Content-Type: application/json; charset=utf8
< Access-Control-Allow-Methods: GET
< Access-Control-Allow-Origin: *
< Transfer-Encoding: chunked
{code}

With the patch:
{code}
curl -v localhost:9876/jmx -o /dev/null 

* Connected to localhost (::1) port 9876 (#0)
> GET /jmx HTTP/1.1
> Host: localhost:9876
> User-Agent: curl/7.55.1
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Sat, 21 Oct 2017 19:55:07 GMT
< Cache-Control: no-cache
< Expires: Sat, 21 Oct 2017 19:55:07 GMT
< Date: Sat, 21 Oct 2017 19:55:07 GMT
< Pragma: no-cache
< Content-Type: application/json; charset=utf8
< X-FRAME-OPTIONS: SAMEORIGIN
< Access-Control-Allow-Methods: GET
< Access-Control-Allow-Origin: *
< Transfer-Encoding: chunked
{code}

Note: the X-FRAME-OPTIONS header is present only in the second case.
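The behaviour the curl transcripts demonstrate can be reproduced with a minimal standalone sketch using the JDK's built-in com.sun.net.httpserver. This is not HttpServer2 or the patch itself; only the header name and the SAMEORIGIN value are taken from the output above, and the /jmx path and class name are illustrative:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

// Minimal sketch: a /jmx-like endpoint that sets "X-FRAME-OPTIONS: SAMEORIGIN"
// on every response, which is the effect the patch enables by default.
class XFrameDemo {
    // Start a server on an ephemeral port (port 0) and return it.
    static HttpServer start() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/jmx", exchange -> {
            byte[] body = "{}".getBytes("UTF-8");
            exchange.getResponseHeaders().set("X-FRAME-OPTIONS", "SAMEORIGIN");
            exchange.getResponseHeaders().set("Content-Type",
                "application/json; charset=utf8");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    // Fetch the X-FRAME-OPTIONS header value, as curl -v would show it.
    static String fetchXFrameHeader(int port) throws Exception {
        URL url = new URL("http://localhost:" + port + "/jmx");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try {
            return conn.getHeaderField("X-FRAME-OPTIONS");
        } finally {
            conn.disconnect();
        }
    }
}
```

Setting the header to SAMEORIGIN tells browsers to refuse to render the page inside a frame served from a different origin, which is the clickjacking protection the patch turns on.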






[jira] [Commented] (HDFS-12683) DFSZKFailOverController re-order logic for logging Exception

2017-10-21 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16214072#comment-16214072
 ] 

Bharat Viswanadham commented on HDFS-12683:
---

The test failures are not related to this patch. I ran the tests locally and 
they passed.

> DFSZKFailOverController re-order logic for logging Exception
> 
>
> Key: HDFS-12683
> URL: https://issues.apache.org/jira/browse/HDFS-12683
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12683.01.patch, HDFS-12683.02.patch, 
> HDFS-12683.03.patch, HDFS-12683.04.patch, HDFS-12683.05.patch, 
> HDFS-12683.06.patch, HDFS-12683.07.patch, HDFS-12683.08.patch, 
> HDFS-12683.09.patch
>
>
> The ZKFC should log fatal exceptions before closing its connections and 
> terminating the server.
> Occasionally we have seen DFSZKFailOverController shut down with no exception 
> or error logged.
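The re-ordering the issue asks for (log the fatal exception first, then release resources and terminate) can be sketched abstractly. Everything below is hypothetical illustration, not the ZKFC code:

```java
// Illustrative ordering only: log the fatal exception BEFORE cleanup and
// termination, so the cause survives even if cleanup itself fails or the
// process dies mid-shutdown. With the reverse order, a failure during
// cleanup can leave no record of the original exception.
class FatalExitOrder {
    interface Log { void fatal(String msg, Throwable t); }

    // Runs body; on a fatal error, logs first, then cleans up, and returns
    // the exit code the caller would pass to the terminator.
    static int runAndExitCode(Runnable body, Log log, Runnable cleanup) {
        try {
            body.run();
            return 0;
        } catch (RuntimeException t) {
            log.fatal("Fatal error, shutting down", t); // log BEFORE cleanup
            cleanup.run();
            return 1;
        }
    }
}
```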






[jira] [Commented] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.

2017-10-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16213985#comment-16213985
 ] 

Hadoop QA commented on HDFS-12396:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
6s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 21s{color} | {color:orange} root: The patch generated 10 new + 275 unchanged 
- 2 fixed = 285 total (was 277) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 26s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 33s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
40s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}225m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ipc.TestRPC |
|   | hadoop.fs.shell.TestCopyPreserveFlag |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.TestMissingBlocksAlert |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestDFSStripedInputStream |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:ca8ddc6 |
| JIRA Issue | HDFS-12396 |
| JIRA Patch URL | 

[jira] [Updated] (HDFS-12396) Webhdfs file system should get delegation token from kms provider.

2017-10-21 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12396:
--
Status: Patch Available  (was: Open)

> Webhdfs file system should get delegation token from kms provider.
> --
>
> Key: HDFS-12396
> URL: https://issues.apache.org/jira/browse/HDFS-12396
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption, kms, webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-12396.001.patch, HDFS-12396.002.patch
>
>







[jira] [Commented] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-10-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16213840#comment-16213840
 ] 

Hadoop QA commented on HDFS-12681:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m  6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 24s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m  7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 57s{color} | {color:green} root generated 0 new + 1251 unchanged - 5 fixed = 1251 total (was 1256) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  2m  5s{color} | {color:orange} root: The patch generated 33 new + 629 unchanged - 12 fixed = 662 total (was 641) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  8m 23s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 42s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 50s{color} | {color:red} hadoop-common-project_hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 23s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 19s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 16s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 27s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m 37s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Null passed for non-null parameter of org.apache.hadoop.hdfs.protocol.HdfsFileStatus$Builder.symlink(byte[]) in org.apache.hadoop.hdfs.protocolPB.PBHelperClient.convert(HdfsProtos$HdfsFileStatusProto)  Method invoked at PBHelperClient.java:of |
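The FindBugs warning above is the usual builder null-argument pattern: a possibly-null value from a protobuf is handed straight to a setter whose contract forbids null. A minimal, hypothetical Java sketch of the guard a converter would apply (the class and method names here are simplified stand-ins, not the real `HdfsFileStatus.Builder` or `PBHelperClient`):

```java
// Hypothetical stand-ins for the Hadoop types named in the FindBugs report;
// the real HdfsFileStatus.Builder and PBHelperClient differ in detail.
class FileStatusBuilder {
    private byte[] symlink = new byte[0];

    // Contract: callers must not pass null -- exactly what FindBugs flags.
    FileStatusBuilder symlink(byte[] target) {
        if (target == null) {
            throw new IllegalArgumentException("symlink must not be null");
        }
        this.symlink = target;
        return this;
    }

    byte[] build() {
        return symlink;
    }
}

public class ConvertSketch {
    // Guarded conversion: call symlink(...) only when the message actually
    // carried one, instead of passing a possible null straight through.
    static byte[] convert(byte[] maybeSymlink) {
        FileStatusBuilder b = new FileStatusBuilder();
        if (maybeSymlink != null) {
            b.symlink(maybeSymlink);
        }
        return b.build();
    }

    public static void main(String[] args) {
        System.out.println(convert(null).length);
    }
}
```

The guard keeps the builder's non-null contract intact while still accepting messages that simply omit the optional field.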

[jira] [Updated] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-10-21 Thread Chris Douglas (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Douglas updated HDFS-12681:
-
Attachment: HDFS-12681.02.patch

> Fold HdfsLocatedFileStatus into HdfsFileStatus
> --
>
> Key: HDFS-12681
> URL: https://issues.apache.org/jira/browse/HDFS-12681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HDFS-12681.00.patch, HDFS-12681.01.patch, 
> HDFS-12681.02.patch
>
>
> {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of 
> {{LocatedFileStatus}}. Conversion requires copying common fields and shedding 
> unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to 
> extend {{LocatedFileStatus}}.
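The copy-and-shed cost described above can be sketched in a toy Java model (simplified field sets and names, not the actual Hadoop classes):

```java
// Toy model of the hierarchy described above; fields are simplified and
// these are not the real Hadoop classes.
class FileStatus {
    final String path;
    FileStatus(String path) { this.path = path; }
}

class LocatedFileStatus extends FileStatus {
    final String[] locations;
    LocatedFileStatus(String path, String[] locations) {
        super(path);
        this.locations = locations;
    }
}

// HdfsFileStatus hangs off FileStatus, not LocatedFileStatus...
class HdfsFileStatus extends FileStatus {
    final byte[] symlink; // HDFS-private data
    HdfsFileStatus(String path, byte[] symlink) {
        super(path);
        this.symlink = symlink;
    }
}

// ...so the located variant cannot simply upcast: converting means copying
// the common fields and shedding the HDFS-private ones.
public class HdfsLocatedFileStatus extends HdfsFileStatus {
    final String[] locations;
    HdfsLocatedFileStatus(String path, byte[] symlink, String[] locations) {
        super(path, symlink);
        this.locations = locations;
    }

    LocatedFileStatus toLocated() {
        // Copy, not cast: the two types share no located ancestor.
        return new LocatedFileStatus(path, locations);
    }

    public static void main(String[] args) {
        HdfsLocatedFileStatus h =
            new HdfsLocatedFileStatus("/a", new byte[0], new String[]{"dn1"});
        System.out.println(h.toLocated().path);
    }
}
```

If `HdfsFileStatus` instead sat under `LocatedFileStatus`, as the issue proposes, `toLocated()` would reduce to a free upcast with no field copying.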






[jira] [Updated] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-10-21 Thread Chris Douglas (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Douglas updated HDFS-12681:
-
Attachment: HDFS-12681.02.patch

Checkstyle, findbugs

> Fold HdfsLocatedFileStatus into HdfsFileStatus
> --
>
> Key: HDFS-12681
> URL: https://issues.apache.org/jira/browse/HDFS-12681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HDFS-12681.00.patch, HDFS-12681.01.patch
>
>
> {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of 
> {{LocatedFileStatus}}. Conversion requires copying common fields and shedding 
> unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to 
> extend {{LocatedFileStatus}}.






[jira] [Updated] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-10-21 Thread Chris Douglas (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Douglas updated HDFS-12681:
-
Attachment: (was: HDFS-12681.02.patch)

> Fold HdfsLocatedFileStatus into HdfsFileStatus
> --
>
> Key: HDFS-12681
> URL: https://issues.apache.org/jira/browse/HDFS-12681
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HDFS-12681.00.patch, HDFS-12681.01.patch
>
>
> {{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of 
> {{LocatedFileStatus}}. Conversion requires copying common fields and shedding 
> unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to 
> extend {{LocatedFileStatus}}.


