[jira] [Resolved] (HDFS-3854) Implement a fence method which should fence the BK shared storage.

2015-06-14 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R resolved HDFS-3854.

Resolution: Duplicate

> Implement a fence method which should fence the BK shared storage.
> --
>
> Key: HDFS-3854
> URL: https://issues.apache.org/jira/browse/HDFS-3854
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Rakesh R
>
> Currently, when the machine or network is down, SSHFence cannot confirm that 
> the other node is completely down, so fencing fails and the failover does not happen.
> [Internally we worked around this by returning true when the machine is not 
> reachable, since BKJM already has fencing.]
> It may be a good idea to implement a fence method that ensures the shared 
> storage is fenced properly and then returns true.
> We can plug this new method into the ZKFC fence methods.
> The only pain point I can see is that we may have to put the BKJM jar on the 
> ZKFC classpath to run this fence method.
> Thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8603) In NN WebUI, some timestamps are shown in the system locale language; in a Chinese environment, Chinese text appears in the NN UI

2015-06-14 Thread huangyitian (JIRA)
huangyitian created HDFS-8603:
-

 Summary: In NN WebUI, some timestamps are shown in the system locale 
language; in a Chinese environment, Chinese text appears in the NN UI
 Key: HDFS-8603
 URL: https://issues.apache.org/jira/browse/HDFS-8603
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: huangyitian
Priority: Minor


In the NN WebUI, on a machine with a Chinese locale, some timestamps are 
rendered with Chinese characters because the UI uses the machine's local language.





[jira] [Resolved] (HDFS-8530) Restore ECZone info inside FSImageLoader

2015-06-14 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki resolved HDFS-8530.
--
Resolution: Duplicate

> Restore ECZone info inside FSImageLoader
> 
>
> Key: HDFS-8530
> URL: https://issues.apache.org/jira/browse/HDFS-8530
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Fix For: HDFS-7285
>
>
> {{FSImageLoader}} provides a file size API, which is used by {{FSImageViewer}} 
> to expose file status over HTTP. 
> Because the cell size is now stored only in the ECZone, it is necessary to 
> restore this information from the fsimage file or the NameNode when launching.





[jira] [Created] (HDFS-8602) Erasure Coding: Client can't read(decode) the EC files which have corrupt blocks.

2015-06-14 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-8602:
--

 Summary: Erasure Coding: Client can't read(decode) the EC files 
which have corrupt blocks.
 Key: HDFS-8602
 URL: https://issues.apache.org/jira/browse/HDFS-8602
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Takanobu Asanuma
 Fix For: HDFS-7285


Before the DataNode(s) report the bad block(s), a client reading an EC file 
that has bad blocks hangs, with no error messages.
(When a client reads a replicated file that has bad blocks, the bad blocks are 
reconstructed at the same time and the client can read it.)





Re: set up jenkins test for branch-2

2015-06-14 Thread Sean Busbey
pre-commit will already test on branch-2 provided you follow the patch
naming guidelines.

there is also a branch-2 specific jenkins job:
https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-branch2/

I'd suggest starting by looking at that job and filing jiras to address
whatever the failures are. May 14th was the last time it wasn't marked as
failed, and even that build was unstable, so there's probably a good deal of work.

On Sun, Jun 14, 2015 at 3:00 PM, Yongjun Zhang  wrote:

> Hi,
>
> We touched this topic before but it was put on hold. I'd like to bring it
> to our attention again.
>
> From time to time we saw changes that work fine in trunk but not branch-2,
> and we don't catch the issue in a timely manner. The difference between
> trunk and branch-2 is sufficient to justify periodic jenkins test and even
> pre-commit test for branch-2.
>
> I created https://issues.apache.org/jira/browse/INFRA-9226 earlier but I'm
> not sure who are the right folks to take care of it.
>
> Any one could help follow-up?
>
> Thanks a lot and best regards,
>
> --Yongjun
>



-- 
Sean


[jira] [Created] (HDFS-8601) TestWebHdfsFileSystemContract.testGetFileBlockLocations fails in branch-2.7

2015-06-14 Thread Jitendra Nath Pandey (JIRA)
Jitendra Nath Pandey created HDFS-8601:
--

 Summary: TestWebHdfsFileSystemContract.testGetFileBlockLocations 
fails in branch-2.7
 Key: HDFS-8601
 URL: https://issues.apache.org/jira/browse/HDFS-8601
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Affects Versions: 2.7.0
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey


The test testGetFileBlockLocations in TestWebHdfsFileSystemContract fails with 
the following stack trace in branch-2.7:

{code}
java.io.IOException: Response decoding failure: java.lang.ClassCastException: 
java.lang.Integer cannot be cast to java.lang.Long
at org.apache.hadoop.hdfs.web.JsonUtil.toDatanodeInfo(JsonUtil.java:340)
at 
org.apache.hadoop.hdfs.web.JsonUtil.toDatanodeInfoArray(JsonUtil.java:415)
at org.apache.hadoop.hdfs.web.JsonUtil.toLocatedBlock(JsonUtil.java:445)
at 
org.apache.hadoop.hdfs.web.JsonUtil.toLocatedBlockList(JsonUtil.java:484)
at 
org.apache.hadoop.hdfs.web.JsonUtil.toLocatedBlocks(JsonUtil.java:517)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$12.decodeResponse(WebHdfsFileSystem.java:1386)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$12.decodeResponse(WebHdfsFileSystem.java:1383)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$FsPathResponseRunner.getResponse(WebHdfsFileSystem.java:737)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:615)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:463)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:492)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:488)
at 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileBlockLocations(WebHdfsFileSystem.java:1382)
at 
org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract.testGetFileBlockLocations(TestWebHdfsFileSystemContract.java:136)
{code}





[jira] [Created] (HDFS-8600) TestWebHdfsFileSystemContract.testGetFileBlockLocations fails in branch-2.7

2015-06-14 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-8600:
---

 Summary: TestWebHdfsFileSystemContract.testGetFileBlockLocations 
fails in branch-2.7
 Key: HDFS-8600
 URL: https://issues.apache.org/jira/browse/HDFS-8600
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.7.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


The test fails with a cast error due to the following commit.

bq. 65bfde5 2015-03-03 wheat9@apa o HDFS-6565. Use jackson instead jetty json 
in hdfs-client. Contributed by Akira AJISAKA.

toJsonMap inserts xferPort as an integer.
{code}m.put("xferPort", datanodeinfo.getXferPort());{code}

Then toDatanodeInfo tries to cast the {{Integer}} object to a {{Long}}.
{code}tmpValue = m.get("xferPort");
int xferPort = (tmpValue == null) ? -1 : (int)(long)(Long)tmpValue;{code}

The test passes in trunk due to HDFS-8080. Since HDFS-8080 is too large to pull 
into 2.7.1, we'll fix the cast issue.
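The failure mode and one possible fix can be sketched outside Hadoop. The class and method names below are illustrative, not the actual patch: the broken path reproduces the reported `ClassCastException` (a boxed `Integer` cast directly to `Long`), while the fixed path casts through `Number`, which both `Integer` and `Long` implement.

```java
import java.util.HashMap;
import java.util.Map;

public class XferPortCast {
    // Mimics toJsonMap: the port is inserted as a boxed Integer.
    static Map<String, Object> jsonMap(int port) {
        Map<String, Object> m = new HashMap<>();
        m.put("xferPort", port);
        return m;
    }

    // Broken decode: casting the boxed Integer to Long throws ClassCastException.
    static int decodeBroken(Map<String, Object> m) {
        Object tmpValue = m.get("xferPort");
        return (tmpValue == null) ? -1 : (int) (long) (Long) tmpValue;
    }

    // Fixed decode: go through Number, which both Integer and Long implement.
    static int decodeFixed(Map<String, Object> m) {
        Object tmpValue = m.get("xferPort");
        return (tmpValue == null) ? -1 : ((Number) tmpValue).intValue();
    }

    public static void main(String[] args) {
        Map<String, Object> m = jsonMap(50010);
        try {
            decodeBroken(m);
            System.out.println("unexpected: no exception");
        } catch (ClassCastException e) {
            // "java.lang.Integer cannot be cast to java.lang.Long", as in the report
            System.out.println("broken decode failed as reported");
        }
        System.out.println("fixed decode: " + decodeFixed(m));
    }
}
```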





set up jenkins test for branch-2

2015-06-14 Thread Yongjun Zhang
Hi,

We touched this topic before but it was put on hold. I'd like to bring it
to our attention again.

From time to time we saw changes that work fine in trunk but not branch-2,
and we don't catch the issue in a timely manner. The difference between
trunk and branch-2 is sufficient to justify periodic jenkins test and even
pre-commit test for branch-2.

I created https://issues.apache.org/jira/browse/INFRA-9226 earlier but I'm
not sure who are the right folks to take care of it.

Any one could help follow-up?

Thanks a lot and best regards,

--Yongjun


[jira] [Resolved] (HDFS-8599) Skip tests that call DataNodeTestUtils.injectDataDirFailure on Windows

2015-06-14 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDFS-8599.
--
Resolution: Duplicate

> Skip tests that call DataNodeTestUtils.injectDataDirFailure on Windows
> --
>
> Key: HDFS-8599
> URL: https://issues.apache.org/jira/browse/HDFS-8599
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>
> This JIRA is open to fix tests that call 
> DataNodeTestUtils.injectDataDirFailure on Windows, which is currently not 
> supported. 
> The following two tests should be skipped when running on Windows.
> {code}
> TestDataNodeHotSwapVolumes.testDirectlyReloadAfterCheckDiskError
> TestDataNodeVolumeFailure.testFailedVolumeBeingRemovedFromDataNode
> {code}





[jira] [Created] (HDFS-8599) Fix TestDataNodeVolumeFailure.testFailedVolumeBeingRemovedFromDataNode on Windows

2015-06-14 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-8599:


 Summary: Fix 
TestDataNodeVolumeFailure.testFailedVolumeBeingRemovedFromDataNode on Windows
 Key: HDFS-8599
 URL: https://issues.apache.org/jira/browse/HDFS-8599
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


The test calls DataNodeTestUtils.injectDataDirFailure, which is not supported 
on Windows.

{code}
java.io.IOException: Failed to rename 
C:\Users\xiaoyu\hadoop\trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1
 to 
C:\Users\xiaoyu\hadoop\trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs\target\test\data\dfs\data\data1.origin.
at 
org.apache.hadoop.hdfs.server.datanode.DataNodeTestUtils.injectDataDirFailure(DataNodeTestUtils.java:181)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure.testFailedVolumeBeingRemovedFromDataNode(TestDataNodeVolumeFailure
{code}
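In Hadoop tests the usual guard for this situation is a JUnit assumption such as `assumeTrue(!Path.WINDOWS)`, which marks the test skipped rather than failed. The plain-Java sketch below (class name and messages are illustrative) shows the same skip-on-Windows idiom:

```java
public class SkipOnWindowsSketch {
    // The same check Hadoop exposes via constants like Path.WINDOWS.
    static boolean isWindows() {
        return System.getProperty("os.name").toLowerCase().startsWith("windows");
    }

    public static void main(String[] args) {
        // The rename-based failure injection only works where POSIX
        // rename semantics apply, so bail out early on Windows.
        if (isWindows()) {
            System.out.println("SKIPPED: injectDataDirFailure is not supported on Windows");
            return;
        }
        System.out.println("running injectDataDirFailure-based test body");
    }
}
```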





[jira] [Created] (HDFS-8598) Add BlockLocation for HdfsFileStatus and FileStatus

2015-06-14 Thread Yong Zhang (JIRA)
Yong Zhang created HDFS-8598:


 Summary: Add BlockLocation for HdfsFileStatus and FileStatus
 Key: HDFS-8598
 URL: https://issues.apache.org/jira/browse/HDFS-8598
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yong Zhang
Assignee: Yong Zhang


If we want to get the block locations of all files in one directory, we have to 
call getFileBlockLocations for each file, which takes a long time because of the 
many requests. 
Adding BlockLocation to HdfsFileStatus and FileStatus can save much of that time.
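The motivation can be illustrated with a toy model that just counts simulated NameNode round trips; none of these classes or methods are real HDFS APIs, only stand-ins for the RPC pattern. Today a directory listing costs one call plus one getFileBlockLocations call per file; piggy-backing locations on the listing reduces that to a single call.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchLocationsSketch {
    // Hypothetical stand-in for the NameNode: every method call is one RPC.
    static int rpcCount = 0;

    static List<String> listStatus(String dir, int files) {
        rpcCount++; // one RPC for the listing itself
        List<String> names = new ArrayList<>();
        for (int i = 0; i < files; i++) names.add(dir + "/f" + i);
        return names;
    }

    static String getFileBlockLocations(String file) {
        rpcCount++; // one extra RPC per file in the current pattern
        return "locations(" + file + ")";
    }

    public static void main(String[] args) {
        // Current pattern: listStatus + one getFileBlockLocations call per file.
        rpcCount = 0;
        for (String f : listStatus("/data", 1000)) getFileBlockLocations(f);
        System.out.println("per-file pattern RPCs: " + rpcCount);   // 1001

        // Proposed pattern: BlockLocation carried on each status entry,
        // so the listing alone is enough.
        rpcCount = 0;
        listStatus("/data", 1000);
        System.out.println("batched pattern RPCs: " + rpcCount);    // 1
    }
}
```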





[jira] [Created] (HDFS-8597) Fix TestFSImage#testZeroBlockSize on Windows

2015-06-14 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-8597:


 Summary: Fix TestFSImage#testZeroBlockSize on Windows
 Key: HDFS-8597
 URL: https://issues.apache.org/jira/browse/HDFS-8597
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


The last portion of the dfs.datanode.data.dir is incorrectly formatted.

{code}2015-06-14 09:44:37,133 INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:startDataNodes(1413)) - Starting DataNode 0 with 
dfs.datanode.data.dir: 
file://C:\Users\xiaoyu\hadoop\trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs\target/test/dfs/data
2015-06-14 09:44:37,141 ERROR common.Util (Util.java:stringAsURI(50)) - Syntax 
error in URI 
file://C:\Users\xiaoyu\hadoop\trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs\target/test/dfs/data.
 Please check hdfs configuration.
java.net.URISyntaxException: Illegal character in authority at index 7: 
file://C:\Users\xiaoyu\hadoop\trunk\hadoop\hadoop-hdfs-project\hadoop-hdfs\target/test/dfs/data
{code}
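The syntax error can be reproduced with `java.net.URI` alone: after `file://`, everything up to the first `/` is parsed as an authority, so the drive letter and backslashes land in the authority and are rejected. The sketch below (class and helper names are illustrative, and the normalization shown is only one possible fix, not the actual patch) demonstrates the failure and a form that parses:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class WindowsUriSketch {
    // Returns true if the string is a syntactically valid java.net.URI.
    static boolean parses(String s) {
        try {
            new URI(s);
            return true;
        } catch (URISyntaxException e) {
            return false;
        }
    }

    // One possible normalization: drop the empty-authority "//" so the
    // drive letter becomes part of the path, and flip backslashes.
    static String normalize(String s) {
        return s.replace("file://", "file:/").replace('\\', '/');
    }

    public static void main(String[] args) {
        // Same shape as the logged value: scheme + "//" + a Windows drive path.
        String bad = "file://C:\\Users\\xiaoyu\\target/test/dfs/data";
        System.out.println(parses(bad));            // false: illegal character in authority
        System.out.println(parses(normalize(bad))); // true:  file:/C:/Users/xiaoyu/target/test/dfs/data
    }
}
```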





Hadoop-Hdfs-trunk-Java8 - Build # 217 - Failure

2015-06-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/217/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7441 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [01:10 min]
[INFO] Apache Hadoop HDFS  FAILURE [  02:48 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.145 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:49 h
[INFO] Finished at: 2015-06-14T14:24:23+00:00
[INFO] Final Memory: 52M/160M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #216
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 810774 bytes
Compression is 0.0%
Took 12 sec
Recording test results
Updating HDFS-8593
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
4 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestPread.testHedgedReadLoopTooManyTimes

Error Message:
java.util.zip.ZipException: invalid stored block lengths

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid stored block 
lengths
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at 
org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown 
Source)
at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown 
Source)
at 
org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2546)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2534)
at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2605)
at 
org.apache.hadoop.conf.Configurati

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #217

2015-06-14 Thread Apache Jenkins Server
See 

Changes:

[cnauroth] HDFS-8593. Calculation of effective layout version mishandles 
comparison to current layout version in storage. Contributed by Chris Nauroth.

--
[...truncated 7248 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.082 sec - in 
org.apache.hadoop.hdfs.util.TestXMLUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.182 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.083 sec - in 
org.apache.hadoop.hdfs.util.TestCyclicIteration
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.801 sec - in 
org.apache.hadoop.hdfs.util.TestDiff
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.582 sec - in 
org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.561 sec - in 
org.apache.hadoop.hdfs.TestDFSStartupVersions
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.283 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.114 sec - in 
org.apache.hadoop.hdfs.TestReservedRawPaths
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.848 sec - in 
org.apache.hadoop.hdfs.TestRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 9.036 sec - in 
org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.531 sec - in 
org.apache.hadoop.hdfs.TestDFSRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.534 sec - in 
org.apache.hadoop.hdfs.TestMiniDFSCluster
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.174 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 70.651 sec - in 
org.apache.hadoop.hdfs.TestLeaseRecovery2
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.287 sec - in 
org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.403 sec - in 
org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apac

Re: Key Rotation in Data-at-Rest Encryption

2015-06-14 Thread Sitaraman Vilayannur
Hi Arun,
 FileEncryptionInfo has both a getKeyName and a getKeyVersionName. What
distinguishes the concept of a key name from a key version?
It appears to me that the key name is closer to a key alias than to a key
version. What is a key version? Thanks much.
Sitaraman

On Sun, Jun 14, 2015 at 2:07 PM, Sitaraman Vilayannur <
vrsitaramanietfli...@gmail.com> wrote:

> Hi Arun,
>  Thanks for your patience. I have a related question In my application i
> need to encrypt/decrypt files
> from the map reduce phase and i need to support key rotation.  Can i
> access the KMS from the map/reduce
> phase to retrieve the key material from the key alias which i retrieve
> from the FileEncryptionInfo class stored in the
>  extended attribute of the file(for decryption)?
>  Any pointers to how i can plug in various key providers such as java
> keystore would be appreciated.
> Is the toString method used to store the FileEncryptionInfo into the
> extended attribute if so how is the
> FileEncryptionInfo object retrieved back from the String.
> Thanks again for your help.
> Sitaraman
>
> On Sun, Jun 14, 2015 at 12:53 PM, Arun Suresh  wrote:
>
>> Apologize if I wasn't clear
>>
>> > Is the EZ key version same as an alias for the key?
>> yup
>>
>> > the EDEK along with the EZ key version is stored in the FIleInfo
>> FileInfo contains both EDEK and EZ key version. The FileInfo (you can look
>> at the *org.apache.hadoop.fs.FileEncryptionInfo* class for more info)
>> object is stored as the value of the extended attribute of that file.
>>
>> > How is the KeyMaterial derived from the KeyAlias and where is the
>> mapping between
>> the two stored? Is it in the KMS?
>> Yup. KMS extends the *org.apache.hadoop.crypto.key.KeyProvider* class. You
>> can take a look at it or a concrete implementation such as
>> JavaKeyStoreProvider for more information.
>>
>> Also, you should probably direct questions related to HDFS encryption to
>> hdfs-dev@hadoop.apache.org
>>
>> Cheers
>> -Arun
>>
>>
>>
>> On Sun, Jun 14, 2015 at 12:11 AM, Sitaraman Vilayannur <
>> vrsitaramanietfli...@gmail.com> wrote:
>>
>> > Hi Arun,
>> > Thanks for your response.
>> > Could you explain this a bit further for me
>> > Is the EZ key version same as an alias for the key?
>> > The EDEK is stored in the extended attributes of the file and the EZkey
>> > Version is stored
>> >  in the FileInfo  why is the EZKey Version not stored in the extended
>> > attributes too.
>> > Where is the FileInfo object persisted? Is it in the NameNode?
>> > How is the KeyMaterial derived from the KeyAlias and where is the
>> mapping
>> > between the two stored? Is it in the KMS?
>> > Thanks much for your help in this.
>> > Sitaraman
>> >
>> > On Sun, Jun 14, 2015 at 12:14 PM, Arun Suresh 
>> > wrote:
>> >
>> > > Hello Sitaraman,
>> > >
>> > > It is the EZ key "version" that is used to generate the EDEK (and
>> > > which is ultimately stored in the encrypted file's extended attributes
>> > > '*raw.hdfs.crypto.encryption.info*'), not really the EZ key itself
>> > > (which is stored in the directory's extended attribute
>> > > ‘*raw.hdfs.crypto.encryption.zone*’).
>> > >
>> > > Essentially, each file in a directory has a unique EDEK, and an EDEK is
>> > > generated with the current version of the directory EZ key. The EDEK
>> > > along with the EZ key version is stored in the FileInfo. While
>> > > decrypting, both these are passed on to the KMS, which provides the
>> > > client with the DEK that can be used to decrypt the file.
>> > >
>> > > Hope this clarifies things.
>> > >
>> > > Cheers
>> > > -Arun
>> > >
>> > > On Sat, Jun 13, 2015 at 9:51 PM, Sitaraman Vilayannur <
>> > > vrsitaramanietfli...@gmail.com> wrote:
>> > >
>> > > > HDFSDataatRestEncryption.pdf says the following about key
>> > > rotation..(please
>> > > > see appended below at the end of the mail)
>> > > > If the existing files do not have their EDEKs reencrypted using the
>> new
>> > > > ezkeyid, how would the existing files be decrypted? That is where is
>> > the
>> > > > mapping between files and its EZKey (for after key rotation
>> different
>> > > files
>> > > > have different EZKeys)ids stored and how is it retrieved?
>> > > > Thanks
>> > > > Sitaraman
>> > > >
>> > > > Key Rotation
>> > > > When the administrator causes a key rotation of the EZkey
>> > > > in the KMS, the encryption zone’s EZkey
>> > > > (stored in the encryption zone directory’s
>> > > raw.hdfs.crypto.encryption.zone
>> > > > extended attribute) gets the new keyid and version (only the version
>> > > > changes). Any new files
>> > > > created in the encryption zone have their DEKs encrypted using the
>> new
>> > > key
>> > > > version. Existing
>> > > > files do not have their EDEKs reencrypted using the new ezkeyid/
>> > > > version, but this will be considered as a future enhancement. Note
>> > that a
> > > > key rotation only needs to cause a reencryption of the DEK

Re: Key Rotation in Data-at-Rest Encryption

2015-06-14 Thread Sitaraman Vilayannur
Hi Arun,
 Thanks for your patience. I have a related question: in my application I
need to encrypt/decrypt files from the map/reduce phase, and I need to
support key rotation. Can I access the KMS from the map/reduce phase to
retrieve the key material from the key alias, which I retrieve from the
FileEncryptionInfo class stored in the extended attribute of the file (for
decryption)?
Any pointers to how I can plug in various key providers, such as the Java
keystore, would be appreciated.
Is the toString method used to store the FileEncryptionInfo in the extended
attribute? If so, how is the FileEncryptionInfo object retrieved back from
the String?
Thanks again for your help.
Sitaraman

On Sun, Jun 14, 2015 at 12:53 PM, Arun Suresh  wrote:

> Apologize if I wasn't clear
>
> > Is the EZ key version same as an alias for the key?
> yup
>
> > the EDEK along with the EZ key version is stored in the FIleInfo
> FileInfo contains both EDEK and EZ key version. The FileInfo (you can look
> at the *org.apache.hadoop.fs.FileEncryptionInfo* class for more info)
> object is stored as the value of the extended attribute of that file.
>
> > How is the KeyMaterial derived from the KeyAlias and where is the
> mapping between
> the two stored? Is it in the KMS?
> Yup. KMS extends the *org.apache.hadoop.crypto.key.KeyProvider* class. You
> can take a look at it or a concrete implementation such as
> JavaKeyStoreProvider for more information.
>
> Also, you should probably direct questions related to HDFS encryption to
> hdfs-dev@hadoop.apache.org
>
> Cheers
> -Arun
>
>
>
> On Sun, Jun 14, 2015 at 12:11 AM, Sitaraman Vilayannur <
> vrsitaramanietfli...@gmail.com> wrote:
>
> > Hi Arun,
> > Thanks for your response.
> > Could you explain this a bit further for me
> > Is the EZ key version same as an alias for the key?
> > The EDEK is stored in the extended attributes of the file and the EZkey
> > Version is stored
> >  in the FileInfo  why is the EZKey Version not stored in the extended
> > attributes too.
> > Where is the FileInfo object persisted? Is it in the NameNode?
> > How is the KeyMaterial derived from the KeyAlias and where is the mapping
> > between the two stored? Is it in the KMS?
> > Thanks much for your help in this.
> > Sitaraman
> >
> > On Sun, Jun 14, 2015 at 12:14 PM, Arun Suresh 
> > wrote:
> >
> > > Hello Sitaraman,
> > >
> > > It is the EZ key "version" that is used to generate the EDEK (and
> > > which is ultimately stored in the encrypted file's extended attributes
> > > '*raw.hdfs.crypto.encryption.info*'), not really the EZ key itself
> > > (which is stored in the directory's extended attribute
> > > ‘*raw.hdfs.crypto.encryption.zone*’).
> > >
> > > Essentially, each file in a directory has a unique EDEK, and an EDEK is
> > > generated with the current version of the directory EZ key. The EDEK
> > > along with the EZ key version is stored in the FileInfo. While
> > > decrypting, both these are passed on to the KMS, which provides the
> > > client with the DEK that can be used to decrypt the file.
> > >
> > > Hope this clarifies things.
> > >
> > > Cheers
> > > -Arun
> > >
> > > On Sat, Jun 13, 2015 at 9:51 PM, Sitaraman Vilayannur <
> > > vrsitaramanietfli...@gmail.com> wrote:
> > >
> > > > HDFSDataatRestEncryption.pdf says the following about key
> > > rotation..(please
> > > > see appended below at the end of the mail)
> > > > If the existing files do not have their EDEKs reencrypted using the
> new
> > > > ezkeyid, how would the existing files be decrypted? That is where is
> > the
> > > > mapping between files and its EZKey (for after key rotation different
> > > files
> > > > have different EZKeys)ids stored and how is it retrieved?
> > > > Thanks
> > > > Sitaraman
> > > >
> > > > Key Rotation
> > > > When the administrator causes a key rotation of the EZkey
> > > > in the KMS, the encryption zone’s EZkey
> > > > (stored in the encryption zone directory’s
> > > raw.hdfs.crypto.encryption.zone
> > > > extended attribute) gets the new keyid and version (only the version
> > > > changes). Any new files
> > > > created in the encryption zone have their DEKs encrypted using the
> new
> > > key
> > > > version. Existing
> > > > files do not have their EDEKs reencrypted using the new ezkeyid/
> > > > version, but this will be considered as a future enhancement. Note
> > that a
> > > > key rotation only needs to cause a reencryption of the DEK, not a
> > > > reencryption of the underlying file.
> > > >
> > >
> >
>
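The scheme described in this thread is envelope encryption: each file's DEK is wrapped by a versioned EZ key into an EDEK, and the file records which key version wrapped it. The toy model below is not the real KMS or HDFS API; every class and name is illustrative. It shows why rotation only affects new files: the key store retains every version, so an old file's (EDEK, version) pair still decrypts after the EZ key is rolled.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Random;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class EnvelopeSketch {
    // Toy KMS: keeps every version of the EZ key so old EDEKs stay decryptable.
    static Map<String, SecretKey> kms = new HashMap<>();
    static int version = 0;

    // Rolling the key creates a new version; old versions are retained.
    static String rollKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        String v = "ezKey@v" + (++version);
        kms.put(v, kg.generateKey());
        return v;
    }

    // Toy wrap/unwrap; real HDFS uses AES/CTR and a proper KMS protocol.
    static byte[] crypt(int mode, SecretKey k, byte[] data) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/NoPadding");
        c.init(mode, k);
        return c.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        String v1 = rollKey();                  // EZ key version 1
        byte[] dek = new byte[16];
        new Random(42).nextBytes(dek);          // per-file DEK
        // EDEK + "ezKey@v1" is what the file's extended attribute records.
        byte[] edek = crypt(Cipher.ENCRYPT_MODE, kms.get(v1), dek);

        rollKey();                              // administrator rotates the EZ key
        // New files would wrap their DEKs with v2; this old file still
        // records v1, so the retained old version unwraps its EDEK.
        byte[] back = crypt(Cipher.DECRYPT_MODE, kms.get(v1), edek);
        System.out.println(Arrays.equals(dek, back)); // true
    }
}
```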


[jira] [Resolved] (HDFS-8559) Erasure Coding: fix non-protobuf fsimage for striped blocks

2015-06-14 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu resolved HDFS-8559.
--
  Resolution: Fixed
Hadoop Flags: Reviewed

Committed to branch. Thanks Jing for the contribution!

> Erasure Coding: fix non-protobuf fsimage for striped blocks
> ---
>
> Key: HDFS-8559
> URL: https://issues.apache.org/jira/browse/HDFS-8559
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-8559.000.patch
>
>
> For a legacy fsimage we currently always record its layoutversion as -51 so 
> that we can make sure it cannot be processed by a protobuf-based image 
> parser. Thus the following code in the parser always returns false and the 
> parse will fail.
> {code}
> NameNodeLayoutVersion.supports(
> NameNodeLayoutVersion.Feature.ERASURE_CODING, imgVersion)
> {code}
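For context, NameNode layout versions in HDFS are negative and decrease as features are added, so `supports()` is effectively a less-than-or-equal comparison against the layout version at which a feature was introduced. A toy sketch of those semantics follows; the -51 value comes from the report above, but the feature's introducing version here is made up for illustration.

```python
# Hypothetical introducing layout versions; only -51 is taken from the report.
FEATURES = {
    "ERASURE_CODING": -64,    # illustrative value, not the real HDFS constant
}

def supports(feature, img_version):
    # Newer layout versions are more negative, so an image supports a feature
    # iff its version is at or below (i.e. newer than) the introducing one.
    return img_version <= FEATURES[feature]
```

A legacy image pinned at -51 therefore fails the check, which is why the protobuf-based parser bails out on it.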



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Key Rotation in Data-at-Rest Encryption

2015-06-14 Thread Arun Suresh
Apologies if I wasn't clear.

> Is the EZ key version the same as an alias for the key?
yup

> the EDEK along with the EZ key version is stored in the FileInfo
FileInfo contains both the EDEK and the EZ key version. The FileInfo object
(see the *org.apache.hadoop.fs.FileEncryptionInfo* class for more info) is
stored as the value of an extended attribute of that file.

> How is the KeyMaterial derived from the KeyAlias and where is the mapping
> between the two stored? Is it in the KMS?
Yup. KMS extends the *org.apache.hadoop.crypto.key.KeyProvider* class. You
can take a look at it or a concrete implementation such as
JavaKeyStoreProvider for more information.
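The envelope-encryption flow described above can be modeled with a short toy sketch. This is plain Python with XOR standing in for AES; the method names only loosely echo KeyProviderCryptoExtension's generateEncryptedKey/decryptEncryptedKey and are not Hadoop APIs.

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for AES: XOR is symmetric, so the same call wraps and unwraps.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class ToyKMS:
    """Holds EZ key material per (name, version), like a KeyProvider backend."""
    def __init__(self):
        self.material = {}   # version id ("name@N") -> key bytes
        self.latest = {}     # key name -> latest version number

    def create_key(self, name):
        return self.roll(name)

    def roll(self, name):
        n = self.latest.get(name, -1) + 1
        self.latest[name] = n
        vid = "%s@%d" % (name, n)
        self.material[vid] = os.urandom(16)
        return vid

    def current_version(self, name):
        return "%s@%d" % (name, self.latest[name])

    def generate_encrypted_key(self, name):
        # Create a fresh per-file DEK and wrap it with the *current* EZ key
        # version; the caller persists the (version id, EDEK) pair.
        vid = self.current_version(name)
        dek = os.urandom(16)
        return vid, xor_cipher(dek, self.material[vid])

    def decrypt_encrypted_key(self, vid, edek):
        # Unwrap with the EZ key version recorded alongside the EDEK.
        return xor_cipher(edek, self.material[vid])
```

In real HDFS the (version id, EDEK) pair is what ends up in the file's FileEncryptionInfo; the client only ever receives the unwrapped DEK from the KMS, never the EZ key material itself.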

Also, you should probably direct questions related to HDFS encryption to
hdfs-dev@hadoop.apache.org

Cheers
-Arun



On Sun, Jun 14, 2015 at 12:11 AM, Sitaraman Vilayannur <
vrsitaramanietfli...@gmail.com> wrote:

> Hi Arun,
> Thanks for your response.
> Could you explain this a bit further for me
> Is the EZ key version the same as an alias for the key?
> The EDEK is stored in the extended attributes of the file and the EZkey
> version is stored in the FileInfo. Why is the EZKey version not stored in
> the extended attributes too?
> Where is the FileInfo object persisted? Is it in the NameNode?
> How is the KeyMaterial derived from the KeyAlias and where is the mapping
> between the two stored? Is it in the KMS?
> Thanks much for your help in this.
> Sitaraman
>
> On Sun, Jun 14, 2015 at 12:14 PM, Arun Suresh 
> wrote:
>
> > Hello Sitaraman,
> >
> > It is the EZ key "version" that is used to generate the EDEK (and which
> > is ultimately stored in the encrypted file's extended attribute
> > '*raw.hdfs.crypto.encryption.info*'), not really the EZ key itself
> > (which is stored in the directory's extended attribute
> > ‘*raw.hdfs.crypto.encryption.zone*’).
> >
> > Essentially, each file in a directory has a unique EDEK, and an EDEK is
> > generated with the current version of the directory's EZ key. The EDEK
> > along with the EZ key version is stored in the FileInfo. While
> > decrypting, both these are passed on to the KMS, which provides the
> > client with the DEK that can be used to decrypt the file.
> >
> > Hope this clarifies things.
> >
> > Cheers
> > -Arun
> >
> > On Sat, Jun 13, 2015 at 9:51 PM, Sitaraman Vilayannur <
> > vrsitaramanietfli...@gmail.com> wrote:
> >
> > > HDFSDataatRestEncryption.pdf says the following about key rotation
> > > (please see appended below at the end of the mail).
> > > If the existing files do not have their EDEKs reencrypted using the
> > > new ezkeyid, how would the existing files be decrypted? That is, where
> > > is the mapping between files and their EZKey ids stored (since after
> > > key rotation different files have different EZKeys), and how is it
> > > retrieved?
> > > Thanks
> > > Sitaraman
> > >
> > > Key Rotation
> > > When the administrator causes a key rotation of the EZkey in the KMS,
> > > the encryption zone’s EZkey (stored in the encryption zone directory’s
> > > raw.hdfs.crypto.encryption.zone extended attribute) gets the new keyid
> > > and version (only the version changes). Any new files created in the
> > > encryption zone have their DEKs encrypted using the new key version.
> > > Existing files do not have their EDEKs reencrypted using the new
> > > ezkeyid/version, but this will be considered as a future enhancement.
> > > Note that a key rotation only needs to cause a reencryption of the
> > > DEK, not a reencryption of the underlying file.
> > >
> >
>