[jira] [Updated] (HDFS-3067) NPE in DFSInputStream.readBuffer if read is repeated on corrupted block

2012-03-11 Thread Henry Robinson (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Robinson updated HDFS-3067:
-

Attachment: HDFS-3067.1.patch

Patch addresses review comments. 

> NPE in DFSInputStream.readBuffer if read is repeated on corrupted block
> ---
>
> Key: HDFS-3067
> URL: https://issues.apache.org/jira/browse/HDFS-3067
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs client
>Affects Versions: 0.24.0
>Reporter: Henry Robinson
>Assignee: Henry Robinson
> Attachments: HDFS-3067.1.patch, HDFS-3607.patch
>
>
> With a singly-replicated block that's corrupted, issuing a read against it 
> twice in succession (e.g. if ChecksumException is caught by the client) gives 
> a NullPointerException.
> Here's the body of a test that reproduces the problem:
> {code}
> final short REPL_FACTOR = 1;
> final long FILE_LENGTH = 512L;
> cluster.waitActive();
> FileSystem fs = cluster.getFileSystem();
> Path path = new Path("/corrupted");
> DFSTestUtil.createFile(fs, path, FILE_LENGTH, REPL_FACTOR, 12345L);
> DFSTestUtil.waitReplication(fs, path, REPL_FACTOR);
> ExtendedBlock block = DFSTestUtil.getFirstBlock(fs, path);
> int blockFilesCorrupted = cluster.corruptBlockOnDataNodes(block);
> assertEquals("All replicas not corrupted", REPL_FACTOR, 
> blockFilesCorrupted);
> InetSocketAddress nnAddr =
> new InetSocketAddress("localhost", cluster.getNameNodePort());
> DFSClient client = new DFSClient(nnAddr, conf);
> DFSInputStream dis = client.open(path.toString());
> byte[] arr = new byte[(int)FILE_LENGTH];
> boolean sawException = false;
> try {
>   dis.read(arr, 0, (int)FILE_LENGTH);
> } catch (ChecksumException ex) { 
>   sawException = true;
> }
> 
> assertTrue(sawException);
> sawException = false;
> try {
>   dis.read(arr, 0, (int)FILE_LENGTH); // <-- NPE thrown here
> } catch (ChecksumException ex) { 
>   sawException = true;
> } 
> {code}
> The stack:
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:492)
>   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:545)
> [snip test stack]
> {code}
> The problem is that currentNode is null: it is left null after the first 
> read fails, and it is never refreshed because the condition in read() that 
> guards the call to blockSeekTo() only fires when the current position is 
> outside the current block's range.
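The analysis suggests a guard that also re-seeks when currentNode is null, not only when the position passes the block boundary. Below is a minimal standalone model of that condition; the names (currentNode, blockEnd, the "seek" decision) mirror DFSInputStream internals but this is an illustrative sketch, not the actual code from the attached patch.

```java
// Minimal model of the read() guard in DFSInputStream (names are
// illustrative stand-ins, not the real HDFS internals).
public class ReadGuardSketch {
    // State mirroring DFSInputStream fields after a failed first read.
    static Object currentNode = null;   // left null when the read fails
    static long pos = 0;                // current stream position
    static long blockEnd = 511;         // last byte of the current block

    // Buggy condition: only re-seeks when pos has passed the block.
    static boolean mustSeekBuggy() {
        return pos > blockEnd;
    }

    // Fixed condition: also re-seeks when no datanode is connected.
    static boolean mustSeekFixed() {
        return pos > blockEnd || currentNode == null;
    }

    public static void main(String[] args) {
        // After the first ChecksumException, currentNode is null but pos is
        // still inside the block, so the buggy guard skips the re-seek and
        // the later dereference of currentNode throws NPE.
        System.out.println("buggy guard re-seeks: " + mustSeekBuggy());  // false: NPE path
        System.out.println("fixed guard re-seeks: " + mustSeekFixed());  // true: reconnects
    }
}
```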

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3062) Fail to submit mapred job on a secured-HA-HDFS: logic URI cannot be picked up by job submission.

2012-03-11 Thread Todd Lipcon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-3062:
--

Target Version/s: 0.24.0, 0.23.3  (was: 0.23.3, 0.24.0)
Release Note:   (was: marking patch available so the bot runs)

> Fail to submit mapred job on a secured-HA-HDFS: logic URI cannot be picked up 
> by job submission.
> 
>
> Key: HDFS-3062
> URL: https://issues.apache.org/jira/browse/HDFS-3062
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, security
>Affects Versions: 0.24.0
>Reporter: Mingjie Lai
>Assignee: Mingjie Lai
>Priority: Critical
> Attachments: HDFS-3062-trunk-2.patch, HDFS-3062-trunk.patch
>
>
> When testing the combination of NN HA + security + yarn, I found that the 
> mapred job submission cannot pick up the logic URI of a nameservice. 
> I have logic URI configured in core-site.xml
> {code}
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://ns1</value>
> </property>
> {code}
> HDFS client can work with the HA deployment/configs:
> {code}
> [root@nn1 hadoop]# hdfs dfs -ls /
> Found 6 items
> drwxr-xr-x   - hbase  hadoop  0 2012-03-07 20:42 /hbase
> drwxrwxrwx   - yarn   hadoop  0 2012-03-07 20:42 /logs
> drwxr-xr-x   - mapred hadoop  0 2012-03-07 20:42 /mapred
> drwxr-xr-x   - mapred hadoop  0 2012-03-07 20:42 /mr-history
> drwxrwxrwt   - hdfs   hadoop  0 2012-03-07 21:57 /tmp
> drwxr-xr-x   - hdfs   hadoop  0 2012-03-07 20:42 /user
> {code}
> but cannot submit a mapred job with security turned on
> {code}
> [root@nn1 hadoop]# /usr/lib/hadoop/bin/yarn --config ./conf jar 
> share/hadoop/mapreduce/hadoop-mapreduce-examples-0.24.0-SNAPSHOT.jar 
> randomwriter out
> Running 0 maps.
> Job started: Wed Mar 07 23:28:23 UTC 2012
> java.lang.IllegalArgumentException: java.net.UnknownHostException: ns1
>   at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:431)
>   at 
> org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:312)
>   at 
> org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:217)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:119)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:97)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:137)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:411)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:326)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1221)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218)
> 
> {code}





[jira] [Updated] (HDFS-3062) Fail to submit mapred job on a secured-HA-HDFS: logic URI cannot be picked up by job submission.

2012-03-11 Thread Todd Lipcon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-3062:
--

   Fix Version/s: (was: 0.24.0)
Target Version/s: 0.24.0, 0.23.3
Release Note: marking patch available so the bot runs
  Status: Patch Available  (was: Open)

> Fail to submit mapred job on a secured-HA-HDFS: logic URI cannot be picked up 
> by job submission.
> 
>
> Key: HDFS-3062
> URL: https://issues.apache.org/jira/browse/HDFS-3062
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, security
>Affects Versions: 0.24.0
>Reporter: Mingjie Lai
>Assignee: Mingjie Lai
>Priority: Critical
> Attachments: HDFS-3062-trunk-2.patch, HDFS-3062-trunk.patch
>
>
> When testing the combination of NN HA + security + yarn, I found that the 
> mapred job submission cannot pick up the logic URI of a nameservice. 
> I have logic URI configured in core-site.xml
> {code}
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://ns1</value>
> </property>
> {code}
> HDFS client can work with the HA deployment/configs:
> {code}
> [root@nn1 hadoop]# hdfs dfs -ls /
> Found 6 items
> drwxr-xr-x   - hbase  hadoop  0 2012-03-07 20:42 /hbase
> drwxrwxrwx   - yarn   hadoop  0 2012-03-07 20:42 /logs
> drwxr-xr-x   - mapred hadoop  0 2012-03-07 20:42 /mapred
> drwxr-xr-x   - mapred hadoop  0 2012-03-07 20:42 /mr-history
> drwxrwxrwt   - hdfs   hadoop  0 2012-03-07 21:57 /tmp
> drwxr-xr-x   - hdfs   hadoop  0 2012-03-07 20:42 /user
> {code}
> but cannot submit a mapred job with security turned on
> {code}
> [root@nn1 hadoop]# /usr/lib/hadoop/bin/yarn --config ./conf jar 
> share/hadoop/mapreduce/hadoop-mapreduce-examples-0.24.0-SNAPSHOT.jar 
> randomwriter out
> Running 0 maps.
> Job started: Wed Mar 07 23:28:23 UTC 2012
> java.lang.IllegalArgumentException: java.net.UnknownHostException: ns1
>   at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:431)
>   at 
> org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:312)
>   at 
> org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:217)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:119)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:97)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:137)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:411)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:326)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1221)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218)
> 
> {code}





[jira] [Commented] (HDFS-1362) Provide volume management functionality for DataNode

2012-03-11 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227318#comment-13227318
 ] 

Uma Maheswara Rao G commented on HDFS-1362:
---

Thanks a lot Wang.

> Provide volume management functionality for DataNode
> 
>
> Key: HDFS-1362
> URL: https://issues.apache.org/jira/browse/HDFS-1362
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: data-node
>Affects Versions: 0.23.0
>Reporter: Wang Xu
>Assignee: Wang Xu
> Fix For: 0.24.0
>
> Attachments: DataNode Volume Refreshment in HDFS-1362.pdf, 
> HDFS-1362.4_w7001.txt, HDFS-1362.5.patch, HDFS-1362.6.patch, 
> HDFS-1362.7.patch, HDFS-1362.8.patch, HDFS-1362.txt, 
> Provide_volume_management_for_DN_v1.pdf
>
>
> The current management unit in Hadoop is a node, i.e. if a node fails, it 
> will be kicked out and all the data on the node will be re-replicated.
> As almost all SATA controllers support hotplug, we add a new command line 
> interface to the datanode so that it can list, add or remove a volume 
> online, which means we can change a disk without decommissioning the node. 
> Moreover, if the failed disk is still readable and the node has enough 
> space, it can migrate the data on that disk to other disks in the same node.
> A more detailed design document will be attached.
> The original version in our lab was implemented directly against the 0.20 
> datanode; would it be better to implement it in contrib? Or is there any 
> other suggestion?
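The list/add/remove operations described above can be pictured with a small sketch. This is a hypothetical shape for the feature; the actual patch wires these operations into the datanode command line, and every name below is illustrative, not the real API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed online volume operations
// (illustrative names only; not the API from the HDFS-1362 patches).
public class VolumeManagerSketch {
    private final List<String> volumes = new ArrayList<>();

    // List the volumes currently in service.
    List<String> listVolumes() {
        return new ArrayList<>(volumes);
    }

    // Bring a hotplugged disk online without restarting the node.
    void addVolume(String path) {
        volumes.add(path);
    }

    // Take a volume out of service; per the proposal, data on a
    // still-readable failing disk would first be migrated to the
    // remaining volumes (migration omitted in this sketch).
    void removeVolume(String path) {
        volumes.remove(path);
    }
}
```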





[jira] [Updated] (HDFS-3062) Fail to submit mapred job on a secured-HA-HDFS: logic URI cannot be picked up by job submission.

2012-03-11 Thread Mingjie Lai (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingjie Lai updated HDFS-3062:
--

Attachment: HDFS-3062-trunk-2.patch

New patch addresses Todd's comments. 

> Fail to submit mapred job on a secured-HA-HDFS: logic URI cannot be picked up 
> by job submission.
> 
>
> Key: HDFS-3062
> URL: https://issues.apache.org/jira/browse/HDFS-3062
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, security
>Affects Versions: 0.24.0
>Reporter: Mingjie Lai
>Assignee: Mingjie Lai
>Priority: Critical
> Fix For: 0.24.0
>
> Attachments: HDFS-3062-trunk-2.patch, HDFS-3062-trunk.patch
>
>
> When testing the combination of NN HA + security + yarn, I found that the 
> mapred job submission cannot pick up the logic URI of a nameservice. 
> I have logic URI configured in core-site.xml
> {code}
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://ns1</value>
> </property>
> {code}
> HDFS client can work with the HA deployment/configs:
> {code}
> [root@nn1 hadoop]# hdfs dfs -ls /
> Found 6 items
> drwxr-xr-x   - hbase  hadoop  0 2012-03-07 20:42 /hbase
> drwxrwxrwx   - yarn   hadoop  0 2012-03-07 20:42 /logs
> drwxr-xr-x   - mapred hadoop  0 2012-03-07 20:42 /mapred
> drwxr-xr-x   - mapred hadoop  0 2012-03-07 20:42 /mr-history
> drwxrwxrwt   - hdfs   hadoop  0 2012-03-07 21:57 /tmp
> drwxr-xr-x   - hdfs   hadoop  0 2012-03-07 20:42 /user
> {code}
> but cannot submit a mapred job with security turned on
> {code}
> [root@nn1 hadoop]# /usr/lib/hadoop/bin/yarn --config ./conf jar 
> share/hadoop/mapreduce/hadoop-mapreduce-examples-0.24.0-SNAPSHOT.jar 
> randomwriter out
> Running 0 maps.
> Job started: Wed Mar 07 23:28:23 UTC 2012
> java.lang.IllegalArgumentException: java.net.UnknownHostException: ns1
>   at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:431)
>   at 
> org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:312)
>   at 
> org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:217)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:119)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:97)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:137)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:411)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:326)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1221)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218)
> 
> {code}





[jira] [Assigned] (HDFS-2781) Add client protocol and DFSadmin for command to restore failed storage

2012-03-11 Thread Suresh Srinivas (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas reassigned HDFS-2781:
-

Assignee: Brandon Li  (was: Eli Collins)

> Add client protocol and DFSadmin for command to restore failed storage
> --
>
> Key: HDFS-2781
> URL: https://issues.apache.org/jira/browse/HDFS-2781
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs client, name-node
>Affects Versions: 0.24.0
>Reporter: Aaron T. Myers
>Assignee: Brandon Li
>
> Per HDFS-2769, it's important that an admin be able to ask the NN to try to 
> restore failed storage, since we may drop into safemode until the shared 
> edits dir is restored (without having to wait for the next checkpoint). 
> There is currently an API (and usage in DFSAdmin) to flip the flag 
> indicating whether the NN should try to restore failed storage, but no way 
> to make it actually attempt the restore; this jira is to add one. This is 
> useful outside HA, but it is filed as an HDFS-1623 sub-task since it's 
> motivated by HA.





[jira] [Commented] (HDFS-3062) Fail to submit mapred job on a secured-HA-HDFS: logic URI cannot be picked up by job submission.

2012-03-11 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227293#comment-13227293
 ] 

Todd Lipcon commented on HDFS-3062:
---

Thanks for the patch, Mingjie! A small formatting nit: can you change the // 
comments for the new test case to be a javadoc-style comment, and move the 
@Test annotation to be on the line just above the method? This way it will 
match the style used elsewhere.

> Fail to submit mapred job on a secured-HA-HDFS: logic URI cannot be picked up 
> by job submission.
> 
>
> Key: HDFS-3062
> URL: https://issues.apache.org/jira/browse/HDFS-3062
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, security
>Affects Versions: 0.24.0
>Reporter: Mingjie Lai
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.24.0
>
> Attachments: HDFS-3062-trunk.patch
>
>
> When testing the combination of NN HA + security + yarn, I found that the 
> mapred job submission cannot pick up the logic URI of a nameservice. 
> I have logic URI configured in core-site.xml
> {code}
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://ns1</value>
> </property>
> {code}
> HDFS client can work with the HA deployment/configs:
> {code}
> [root@nn1 hadoop]# hdfs dfs -ls /
> Found 6 items
> drwxr-xr-x   - hbase  hadoop  0 2012-03-07 20:42 /hbase
> drwxrwxrwx   - yarn   hadoop  0 2012-03-07 20:42 /logs
> drwxr-xr-x   - mapred hadoop  0 2012-03-07 20:42 /mapred
> drwxr-xr-x   - mapred hadoop  0 2012-03-07 20:42 /mr-history
> drwxrwxrwt   - hdfs   hadoop  0 2012-03-07 21:57 /tmp
> drwxr-xr-x   - hdfs   hadoop  0 2012-03-07 20:42 /user
> {code}
> but cannot submit a mapred job with security turned on
> {code}
> [root@nn1 hadoop]# /usr/lib/hadoop/bin/yarn --config ./conf jar 
> share/hadoop/mapreduce/hadoop-mapreduce-examples-0.24.0-SNAPSHOT.jar 
> randomwriter out
> Running 0 maps.
> Job started: Wed Mar 07 23:28:23 UTC 2012
> java.lang.IllegalArgumentException: java.net.UnknownHostException: ns1
>   at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:431)
>   at 
> org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:312)
>   at 
> org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:217)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:119)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:97)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:137)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:411)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:326)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1221)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218)
> 
> {code}





[jira] [Assigned] (HDFS-3062) Fail to submit mapred job on a secured-HA-HDFS: logic URI cannot be picked up by job submission.

2012-03-11 Thread Todd Lipcon (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon reassigned HDFS-3062:
-

Assignee: Mingjie Lai  (was: Todd Lipcon)

> Fail to submit mapred job on a secured-HA-HDFS: logic URI cannot be picked up 
> by job submission.
> 
>
> Key: HDFS-3062
> URL: https://issues.apache.org/jira/browse/HDFS-3062
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, security
>Affects Versions: 0.24.0
>Reporter: Mingjie Lai
>Assignee: Mingjie Lai
>Priority: Critical
> Fix For: 0.24.0
>
> Attachments: HDFS-3062-trunk.patch
>
>
> When testing the combination of NN HA + security + yarn, I found that the 
> mapred job submission cannot pick up the logic URI of a nameservice. 
> I have logic URI configured in core-site.xml
> {code}
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://ns1</value>
> </property>
> {code}
> HDFS client can work with the HA deployment/configs:
> {code}
> [root@nn1 hadoop]# hdfs dfs -ls /
> Found 6 items
> drwxr-xr-x   - hbase  hadoop  0 2012-03-07 20:42 /hbase
> drwxrwxrwx   - yarn   hadoop  0 2012-03-07 20:42 /logs
> drwxr-xr-x   - mapred hadoop  0 2012-03-07 20:42 /mapred
> drwxr-xr-x   - mapred hadoop  0 2012-03-07 20:42 /mr-history
> drwxrwxrwt   - hdfs   hadoop  0 2012-03-07 21:57 /tmp
> drwxr-xr-x   - hdfs   hadoop  0 2012-03-07 20:42 /user
> {code}
> but cannot submit a mapred job with security turned on
> {code}
> [root@nn1 hadoop]# /usr/lib/hadoop/bin/yarn --config ./conf jar 
> share/hadoop/mapreduce/hadoop-mapreduce-examples-0.24.0-SNAPSHOT.jar 
> randomwriter out
> Running 0 maps.
> Job started: Wed Mar 07 23:28:23 UTC 2012
> java.lang.IllegalArgumentException: java.net.UnknownHostException: ns1
>   at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:431)
>   at 
> org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:312)
>   at 
> org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:217)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:119)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:97)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:137)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:411)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:326)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1221)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218)
> 
> {code}





[jira] [Commented] (HDFS-1362) Provide volume management functionality for DataNode

2012-03-11 Thread Wang Xu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227290#comment-13227290
 ] 

Wang Xu commented on HDFS-1362:
---

Hi Uma,

I will try that in a few days.

> Provide volume management functionality for DataNode
> 
>
> Key: HDFS-1362
> URL: https://issues.apache.org/jira/browse/HDFS-1362
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: data-node
>Affects Versions: 0.23.0
>Reporter: Wang Xu
>Assignee: Wang Xu
> Fix For: 0.24.0
>
> Attachments: DataNode Volume Refreshment in HDFS-1362.pdf, 
> HDFS-1362.4_w7001.txt, HDFS-1362.5.patch, HDFS-1362.6.patch, 
> HDFS-1362.7.patch, HDFS-1362.8.patch, HDFS-1362.txt, 
> Provide_volume_management_for_DN_v1.pdf
>
>
> The current management unit in Hadoop is a node, i.e. if a node fails, it 
> will be kicked out and all the data on the node will be re-replicated.
> As almost all SATA controllers support hotplug, we add a new command line 
> interface to the datanode so that it can list, add or remove a volume 
> online, which means we can change a disk without decommissioning the node. 
> Moreover, if the failed disk is still readable and the node has enough 
> space, it can migrate the data on that disk to other disks in the same node.
> A more detailed design document will be attached.
> The original version in our lab was implemented directly against the 0.20 
> datanode; would it be better to implement it in contrib? Or is there any 
> other suggestion?





[jira] [Updated] (HDFS-3062) Fail to submit mapred job on a secured-HA-HDFS: logic URI cannot be picked up by job submission.

2012-03-11 Thread Mingjie Lai (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingjie Lai updated HDFS-3062:
--

Attachment: HDFS-3062-trunk.patch

A patch for the issue: overriding getCanonicalServiceName() at 
DistributedFileSystem. 

After this patch gets applied, I can have a mapred job running with ha + 
security. 
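The failure mode and the override can be pictured with a small standalone sketch. This is illustrative only: the real change lives in DistributedFileSystem.getCanonicalServiceName(), and the simplified names below (the "ns1" check standing in for a DNS lookup, the hardcoded port) are assumptions, not the actual Hadoop code.

```java
// Illustrative sketch: why building a token service name by resolving the
// URI host fails for a logical HA nameservice, and what an override in
// DistributedFileSystem can return instead. Simplified stand-ins throughout.
public class CanonicalNameSketch {
    // Default behavior (simplified): "host:port" after DNS resolution; a
    // logical name like "ns1" is not resolvable, which surfaces as the
    // UnknownHostException in the stack trace above.
    static String defaultServiceName(String host) {
        boolean resolvable = !host.equals("ns1");   // stand-in for a DNS lookup
        if (!resolvable) {
            throw new IllegalArgumentException(
                "java.net.UnknownHostException: " + host);
        }
        return host + ":8020";   // hypothetical NN RPC port
    }

    // HA-aware override (simplified): for a logical URI, return the URI
    // itself as the token service name and skip resolution entirely, so
    // token selection can key on "hdfs://ns1".
    static String canonicalServiceName(String uri, String host,
                                       boolean isLogicalUri) {
        return isLogicalUri ? uri : defaultServiceName(host);
    }

    public static void main(String[] args) {
        System.out.println(canonicalServiceName("hdfs://ns1", "ns1", true));
    }
}
```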

> Fail to submit mapred job on a secured-HA-HDFS: logic URI cannot be picked up 
> by job submission.
> 
>
> Key: HDFS-3062
> URL: https://issues.apache.org/jira/browse/HDFS-3062
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, security
>Affects Versions: 0.24.0
>Reporter: Mingjie Lai
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.24.0
>
> Attachments: HDFS-3062-trunk.patch
>
>
> When testing the combination of NN HA + security + yarn, I found that the 
> mapred job submission cannot pick up the logic URI of a nameservice. 
> I have logic URI configured in core-site.xml
> {code}
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://ns1</value>
> </property>
> {code}
> HDFS client can work with the HA deployment/configs:
> {code}
> [root@nn1 hadoop]# hdfs dfs -ls /
> Found 6 items
> drwxr-xr-x   - hbase  hadoop  0 2012-03-07 20:42 /hbase
> drwxrwxrwx   - yarn   hadoop  0 2012-03-07 20:42 /logs
> drwxr-xr-x   - mapred hadoop  0 2012-03-07 20:42 /mapred
> drwxr-xr-x   - mapred hadoop  0 2012-03-07 20:42 /mr-history
> drwxrwxrwt   - hdfs   hadoop  0 2012-03-07 21:57 /tmp
> drwxr-xr-x   - hdfs   hadoop  0 2012-03-07 20:42 /user
> {code}
> but cannot submit a mapred job with security turned on
> {code}
> [root@nn1 hadoop]# /usr/lib/hadoop/bin/yarn --config ./conf jar 
> share/hadoop/mapreduce/hadoop-mapreduce-examples-0.24.0-SNAPSHOT.jar 
> randomwriter out
> Running 0 maps.
> Job started: Wed Mar 07 23:28:23 UTC 2012
> java.lang.IllegalArgumentException: java.net.UnknownHostException: ns1
>   at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:431)
>   at 
> org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:312)
>   at 
> org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:217)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:119)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:97)
>   at 
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
>   at 
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:137)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:411)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:326)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1221)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218)
> 
> {code}





[jira] [Resolved] (HDFS-3075) Add mechanism to restore the removed storage directories

2012-03-11 Thread Eli Collins (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins resolved HDFS-3075.
---

Resolution: Duplicate

This is a dupe of HDFS-2781. Brandon, feel free to post a patch there.

> Add mechanism to restore the removed storage directories
> 
>
> Key: HDFS-3075
> URL: https://issues.apache.org/jira/browse/HDFS-3075
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.24.0, 1.1.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>
> When a storage directory is inaccessible, the namenode moves it from the 
> valid storage dir list to a removedStorageDirs list. Those storage 
> directories will not be restored when they become healthy again.
> The proposed solution is to restore the previously failed directories at the 
> beginning of checkpointing, say in rollEdits, by copying the necessary 
> metadata files from a healthy directory to the unhealthy ones. In this way, 
> whenever a failed storage directory is recovered by the administrator, 
> he/she can immediately force a checkpoint to restore the failed directory.





[jira] [Created] (HDFS-3076) need to remove unhealthy storage directories

2012-03-11 Thread Brandon Li (Created) (JIRA)
need to remove unhealthy storage directories


 Key: HDFS-3076
 URL: https://issues.apache.org/jira/browse/HDFS-3076
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.24.0
Reporter: Brandon Li
Assignee: Brandon Li


When TransferFsImage can't access the storage directories to transfer the 
metadata files, it throws an IOException. It should also add these directories 
to the removedStorageDirs list, so that they are visible in the namenode UI 
and can be restored later on.
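A minimal sketch of the proposed behavior follows. The class and method names are illustrative, not the actual TransferFsImage/NNStorage API: on an IO failure the directory is recorded in removedStorageDirs before the exception propagates.

```java
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Sketch of the proposed behavior (illustrative names only): when the image
// transfer cannot write to a storage directory, record it in
// removedStorageDirs instead of only throwing, so the failure shows up in
// the namenode UI and the directory can be restored later.
public class StorageDirTracker {
    final List<String> storageDirs = new ArrayList<>();
    final List<String> removedStorageDirs = new ArrayList<>();

    void transferTo(String dir) throws IOException {
        if (!new File(dir).canWrite()) {   // stand-in for the transfer failure
            storageDirs.remove(dir);
            removedStorageDirs.add(dir);   // now visible and restorable
            throw new IOException("cannot write image to " + dir);
        }
        // ... write the checkpoint image here (omitted) ...
    }
}
```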





[jira] [Commented] (HDFS-3075) Add mechanism to restore the removed storage directories

2012-03-11 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227256#comment-13227256
 ] 

Uma Maheswara Rao G commented on HDFS-3075:
---

Hi Brandon,
It seems to me that you are describing the same issue (HADOOP-4885), which has 
already been addressed, right?
We also have a property to enable or disable that feature: 
"dfs.namenode.name.dir.restore".
Are you talking about some other issue/improvement here?

> Add mechanism to restore the removed storage directories
> 
>
> Key: HDFS-3075
> URL: https://issues.apache.org/jira/browse/HDFS-3075
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: name-node
>Affects Versions: 0.24.0, 1.1.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>
> When a storage directory is inaccessible, the namenode moves it from the 
> valid storage dir list to a removedStorageDirs list. Those storage 
> directories will not be restored when they become healthy again.
> The proposed solution is to restore the previously failed directories at the 
> beginning of checkpointing, say in rollEdits, by copying the necessary 
> metadata files from a healthy directory to the unhealthy ones. In this way, 
> whenever a failed storage directory is recovered by the administrator, 
> he/she can immediately force a checkpoint to restore the failed directory.





[jira] [Created] (HDFS-3075) Add mechanism to restore the removed storage directories

2012-03-11 Thread Brandon Li (Created) (JIRA)
Add mechanism to restore the removed storage directories


 Key: HDFS-3075
 URL: https://issues.apache.org/jira/browse/HDFS-3075
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.24.0, 1.1.0
Reporter: Brandon Li
Assignee: Brandon Li


When a storage directory is inaccessible, the namenode moves it from the valid 
storage dir list to a removedStorageDirs list. Those storage directories will 
not be restored when they become healthy again. 

The proposed solution is to restore the previously failed directories at the 
beginning of checkpointing, say, rollEdits, by copying the necessary metadata 
files from a healthy directory to the unhealthy ones. In this way, whenever a 
failed storage directory is recovered by the administrator, he/she can 
immediately force a checkpoint to restore the failed directory.
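The restore step proposed above amounts to scanning the removed-directory list
at checkpoint time and re-adopting any directory that is accessible again. A
minimal self-contained sketch of that selection logic; class and method names
are illustrative, not the actual NameNode code:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

class RestoreSketch {
    /**
     * Return the subset of previously failed storage directories that
     * are accessible again and are therefore restore candidates at the
     * next checkpoint (rollEdits).
     */
    static List<File> restorable(List<File> removedStorageDirs) {
        List<File> ok = new ArrayList<>();
        for (File dir : removedStorageDirs) {
            // A directory qualifies once it exists and is writable again;
            // the real restore would then copy metadata into it.
            if (dir.isDirectory() && dir.canWrite()) {
                ok.add(dir);
            }
        }
        return ok;
    }

    public static void main(String[] args) {
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        List<File> removed = new ArrayList<>();
        removed.add(tmp);                         // accessible again
        removed.add(new File(tmp, "still-gone")); // still absent
        System.out.println(restorable(removed).size()); // prints 1
    }
}
```

The actual patch would additionally copy the current fsimage/edits metadata
from a healthy directory into each restored one before re-adding it.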

--




[jira] [Assigned] (HDFS-2902) HA: Allow new (shared) edits log dir to be configured while NN is running

2012-03-11 Thread Brandon Li (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li reassigned HDFS-2902:


Assignee: Brandon Li  (was: Bikas Saha)

> HA: Allow new (shared) edits log dir to be configured while NN is running
> -
>
> Key: HDFS-2902
> URL: https://issues.apache.org/jira/browse/HDFS-2902
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: ha, name-node
>Affects Versions: 0.24.0
>Reporter: Bikas Saha
>Assignee: Brandon Li
>


--




[jira] [Commented] (HDFS-3074) HDFS ignores group of a user when creating a file or a directory, and instead inherits

2012-03-11 Thread Harsh J (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227217#comment-13227217
 ] 

Harsh J commented on HDFS-3074:
---

Gotcha. Thanks for resolving this!

> HDFS ignores group of a user when creating a file or a directory, and instead 
> inherits
> --
>
> Key: HDFS-3074
> URL: https://issues.apache.org/jira/browse/HDFS-3074
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.23.1
>Reporter: Harsh J
>Priority: Minor
>
> When creating a file or making a directory on HDFS, the namesystem calls pass 
> {{null}} for the group name, thereby having the parent directory's group 
> inherited by the new file.
> This is not how the Linux FS works, at least.
> For instance, if I have today a user 'foo' with default group 'foo', and I 
> have my HDFS home dir created as "foo:foo" by the HDFS admin, all files I 
> create under my directory too will have "foo" as group unless I chgrp them 
> myself. This makes sense.
> Now, if my admin were to change my local account's default/primary group to 
> 'bar' (but did not change it on my homedir on HDFS), and I were to continue 
> writing files to my home directory or any subdirectory that has 'foo' as 
> group, all files would still get created with group 'foo' - as if the NN had 
> not realized that the primary group of the mapped shell account has changed.
> On Linux it is the opposite. My login session's current primary group is 
> what determines the default group on my created files and directories, not 
> the parent dir's owner.
> If the create and mkdirs calls passed the UGI's group info 
> (UserGroupInformation.getCurrentUser().getGroupNames()[0] should give the 
> primary group?) along in their calls instead of a null in the 
> PermissionStatus object, perhaps this could be avoided.
> Or should we leave this as-is, and instead state that if admins wish to 
> change their users' default groups, they'd have to chgrp all the directories 
> themselves?
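The two behaviors contrasted above can be reduced to a one-line policy choice.
A minimal illustrative sketch (class and method names are hypothetical; HDFS
does not expose this as a standalone helper):

```java
// Contrast of the two group-assignment policies discussed in the issue.
class GroupPolicy {
    /** Linux-style: the creating user's current primary group wins. */
    static String linuxStyle(String userPrimaryGroup, String parentDirGroup) {
        return userPrimaryGroup;
    }

    /** BSD-style (what HDFS does): inherit the parent directory's group. */
    static String bsdStyle(String userPrimaryGroup, String parentDirGroup) {
        return parentDirGroup;
    }

    public static void main(String[] args) {
        // User whose primary group changed to 'bar'; home dir is still foo:foo.
        System.out.println(linuxStyle("bar", "foo")); // prints bar
        System.out.println(bsdStyle("bar", "foo"));   // prints foo
    }
}
```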

--




[jira] [Commented] (HDFS-1624) Append: The condition is incorrect for checking whether the last block is full

2012-03-11 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227215#comment-13227215
 ] 

Hudson commented on HDFS-1624:
--

Integrated in Hadoop-Mapreduce-0.23-Commit #679 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/679/])
HDFS-1624, HADOOP-7454 - Merge r1296540 from trunk to 0.23 to fix 
CHANGES.txt (Revision 1299415)

 Result = ABORTED
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1299415
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.HDFS-1623.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.HDFS-1623.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Append: The condition is incorrect for checking whether the last block is full
> --
>
> Key: HDFS-1624
> URL: https://issues.apache.org/jira/browse/HDFS-1624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.21.0, 0.22.0, 0.23.0
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h1624_20110211.patch
>
>
> When the last block is full, the free space should be 0, not equal to the 
> block size.
> {code}
> //In DFSOutputStream.DataStreamer.DataStreamer(..),
>   if (freeInLastBlock == blockSize) {
> throw new IOException("The last block for file " + 
> src + " is full.");
>   }
> {code}
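The fix implied by the report above is to test for zero free space rather than
comparing against the block size. A minimal self-contained sketch; the class
and method names are illustrative, not the actual DFSOutputStream code:

```java
// Corrected "last block is full" check: a full block has zero free space.
class LastBlockCheck {
    static boolean isLastBlockFull(long freeInLastBlock, long blockSize) {
        // The original code compared freeInLastBlock == blockSize, which
        // instead matches a completely *empty* last block.
        return freeInLastBlock == 0;
    }

    public static void main(String[] args) {
        long blockSize = 64L * 1024 * 1024;
        System.out.println(isLastBlockFull(0, blockSize));         // prints true
        System.out.println(isLastBlockFull(blockSize, blockSize)); // prints false
    }
}
```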

--




[jira] [Resolved] (HDFS-3074) HDFS ignores group of a user when creating a file or a directory, and instead inherits

2012-03-11 Thread Todd Lipcon (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HDFS-3074.
---

Resolution: Won't Fix

For whatever strange historical reason, Hadoop uses BSD semantics instead of 
Linux semantics for group ownership at creation. I filed this "bug" once, too :)

> HDFS ignores group of a user when creating a file or a directory, and instead 
> inherits
> --
>
> Key: HDFS-3074
> URL: https://issues.apache.org/jira/browse/HDFS-3074
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: name-node
>Affects Versions: 0.23.1
>Reporter: Harsh J
>Priority: Minor
>
> When creating a file or making a directory on HDFS, the namesystem calls pass 
> {{null}} for the group name, thereby having the parent directory's group 
> inherited by the new file.
> This is not how the Linux FS works, at least.
> For instance, if I have today a user 'foo' with default group 'foo', and I 
> have my HDFS home dir created as "foo:foo" by the HDFS admin, all files I 
> create under my directory too will have "foo" as group unless I chgrp them 
> myself. This makes sense.
> Now, if my admin were to change my local account's default/primary group to 
> 'bar' (but did not change it on my homedir on HDFS), and I were to continue 
> writing files to my home directory or any subdirectory that has 'foo' as 
> group, all files would still get created with group 'foo' - as if the NN had 
> not realized that the primary group of the mapped shell account has changed.
> On Linux it is the opposite. My login session's current primary group is 
> what determines the default group on my created files and directories, not 
> the parent dir's owner.
> If the create and mkdirs calls passed the UGI's group info 
> (UserGroupInformation.getCurrentUser().getGroupNames()[0] should give the 
> primary group?) along in their calls instead of a null in the 
> PermissionStatus object, perhaps this could be avoided.
> Or should we leave this as-is, and instead state that if admins wish to 
> change their users' default groups, they'd have to chgrp all the directories 
> themselves?

--




[jira] [Commented] (HDFS-1623) High Availability Framework for HDFS NN

2012-03-11 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227200#comment-13227200
 ] 

Hudson commented on HDFS-1623:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1873 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1873/])
Moving HDFS-1623 and HADOOP-7454 to 0.23.3 section in CHANGES.txt files 
(Revision 1299417)

 Result = ABORTED
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1299417
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> High Availability Framework for HDFS NN
> ---
>
> Key: HDFS-1623
> URL: https://issues.apache.org/jira/browse/HDFS-1623
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Sanjay Radia
> Fix For: 0.24.0, 0.23.3
>
> Attachments: HA-tests.pdf, HDFS-1623.rel23.patch, 
> HDFS-1623.trunk.patch, HDFS-High-Availability.pdf, NameNode HA_v2.pdf, 
> NameNode HA_v2_1.pdf, Namenode HA Framework.pdf, dfsio-results.tsv, 
> ha-testplan.pdf, ha-testplan.tex
>
>


--




[jira] [Commented] (HDFS-1623) High Availability Framework for HDFS NN

2012-03-11 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227196#comment-13227196
 ] 

Hudson commented on HDFS-1623:
--

Integrated in Hadoop-Mapreduce-0.23-Commit #678 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/678/])
HDFS-1623. Merging change r1296534 from trunk to 0.23 (Revision 1299412)

 Result = ABORTED
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1299412
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.HDFS-1623.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/service_level_auth.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/BadFencingConfigurationException.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/FailoverController.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/FailoverFailedException.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/FenceMethod.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAAdmin.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocolHelper.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HealthCheckFailedException.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/NodeFencer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ServiceFailedException.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ShellCommandFencer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/SshFenceByTcpPort.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/StreamPumper.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/protocolPB
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/protocolPB/HAServiceProtocolClientSideTranslatorPB.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/protocolPB/HAServiceProtocolPB.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/protocolPB/HAServiceProtocolServerSideTranslatorPB.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/DefaultFailoverProxyProvider.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/FailoverProxyProvider.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryInvocationHandler.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryPolicies.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryPolicy.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtocolTranslator.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project

[jira] [Commented] (HDFS-1623) High Availability Framework for HDFS NN

2012-03-11 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227182#comment-13227182
 ] 

Hudson commented on HDFS-1623:
--

Integrated in Hadoop-Hdfs-trunk-Commit #1939 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1939/])
Moving HDFS-1623 and HADOOP-7454 to 0.23.3 section in CHANGES.txt files 
(Revision 1299417)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1299417
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> High Availability Framework for HDFS NN
> ---
>
> Key: HDFS-1623
> URL: https://issues.apache.org/jira/browse/HDFS-1623
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Sanjay Radia
> Fix For: 0.24.0, 0.23.3
>
> Attachments: HA-tests.pdf, HDFS-1623.rel23.patch, 
> HDFS-1623.trunk.patch, HDFS-High-Availability.pdf, NameNode HA_v2.pdf, 
> NameNode HA_v2_1.pdf, Namenode HA Framework.pdf, dfsio-results.tsv, 
> ha-testplan.pdf, ha-testplan.tex
>
>


--




[jira] [Commented] (HDFS-1623) High Availability Framework for HDFS NN

2012-03-11 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227184#comment-13227184
 ] 

Hudson commented on HDFS-1623:
--

Integrated in Hadoop-Common-trunk-Commit #1864 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1864/])
Moving HDFS-1623 and HADOOP-7454 to 0.23.3 section in CHANGES.txt files 
(Revision 1299417)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1299417
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> High Availability Framework for HDFS NN
> ---
>
> Key: HDFS-1623
> URL: https://issues.apache.org/jira/browse/HDFS-1623
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Sanjay Radia
> Fix For: 0.24.0, 0.23.3
>
> Attachments: HA-tests.pdf, HDFS-1623.rel23.patch, 
> HDFS-1623.trunk.patch, HDFS-High-Availability.pdf, NameNode HA_v2.pdf, 
> NameNode HA_v2_1.pdf, Namenode HA Framework.pdf, dfsio-results.tsv, 
> ha-testplan.pdf, ha-testplan.tex
>
>


--




[jira] [Commented] (HDFS-1624) Append: The condition is incorrect for checking whether the last block is full

2012-03-11 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227133#comment-13227133
 ] 

Hudson commented on HDFS-1624:
--

Integrated in Hadoop-Common-0.23-Commit #671 (See 
[https://builds.apache.org/job/Hadoop-Common-0.23-Commit/671/])
HDFS-1624, HADOOP-7454 - Merge r1296540 from trunk to 0.23 to fix 
CHANGES.txt (Revision 1299415)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1299415
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.HDFS-1623.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.HDFS-1623.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Append: The condition is incorrect for checking whether the last block is full
> --
>
> Key: HDFS-1624
> URL: https://issues.apache.org/jira/browse/HDFS-1624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.21.0, 0.22.0, 0.23.0
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h1624_20110211.patch
>
>
> When the last block is full, the free space should be 0, not equal to the 
> block size.
> {code}
> //In DFSOutputStream.DataStreamer.DataStreamer(..),
>   if (freeInLastBlock == blockSize) {
> throw new IOException("The last block for file " + 
> src + " is full.");
>   }
> {code}

--




[jira] [Commented] (HDFS-1624) Append: The condition is incorrect for checking whether the last block is full

2012-03-11 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227128#comment-13227128
 ] 

Hudson commented on HDFS-1624:
--

Integrated in Hadoop-Hdfs-0.23-Commit #662 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/662/])
HDFS-1624, HADOOP-7454 - Merge r1296540 from trunk to 0.23 to fix 
CHANGES.txt (Revision 1299415)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1299415
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.HDFS-1623.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.HDFS-1623.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Append: The condition is incorrect for checking whether the last block is full
> --
>
> Key: HDFS-1624
> URL: https://issues.apache.org/jira/browse/HDFS-1624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.21.0, 0.22.0, 0.23.0
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h1624_20110211.patch
>
>
> When the last block is full, the free space should be 0, not equal to the 
> block size.
> {code}
> //In DFSOutputStream.DataStreamer.DataStreamer(..),
>   if (freeInLastBlock == blockSize) {
> throw new IOException("The last block for file " + 
> src + " is full.");
>   }
> {code}

--




[jira] [Commented] (HDFS-1623) High Availability Framework for HDFS NN

2012-03-11 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227123#comment-13227123
 ] 

Hudson commented on HDFS-1623:
--

Integrated in Hadoop-Common-0.23-Commit #670 (See 
[https://builds.apache.org/job/Hadoop-Common-0.23-Commit/670/])
HDFS-1623. Merging change r1296534 from trunk to 0.23 (Revision 1299412)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1299412
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.HDFS-1623.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/service_level_auth.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/BadFencingConfigurationException.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/FailoverController.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/FailoverFailedException.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/FenceMethod.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAAdmin.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocolHelper.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HealthCheckFailedException.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/NodeFencer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ServiceFailedException.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ShellCommandFencer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/SshFenceByTcpPort.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/StreamPumper.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/protocolPB
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/protocolPB/HAServiceProtocolClientSideTranslatorPB.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/protocolPB/HAServiceProtocolPB.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/protocolPB/HAServiceProtocolServerSideTranslatorPB.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/DefaultFailoverProxyProvider.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/FailoverProxyProvider.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryInvocationHandler.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryPolicies.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryPolicy.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtocolTranslator.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoo

[jira] [Commented] (HDFS-1623) High Availability Framework for HDFS NN

2012-03-11 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13227122#comment-13227122
 ] 

Hudson commented on HDFS-1623:
--

Integrated in Hadoop-Hdfs-0.23-Commit #661 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/661/])
HDFS-1623. Merging change r1296534 from trunk to 0.23 (Revision 1299412)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1299412
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.HDFS-1623.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/service_level_auth.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/BadFencingConfigurationException.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/FailoverController.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/FailoverFailedException.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/FenceMethod.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAAdmin.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocolHelper.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HealthCheckFailedException.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/NodeFencer.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ServiceFailedException.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ShellCommandFencer.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/SshFenceByTcpPort.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/StreamPumper.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/protocolPB
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/protocolPB/HAServiceProtocolClientSideTranslatorPB.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/protocolPB/HAServiceProtocolPB.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/protocolPB/HAServiceProtocolServerSideTranslatorPB.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/DefaultFailoverProxyProvider.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/FailoverProxyProvider.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryInvocationHandler.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryPolicies.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryPolicy.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtocolTranslator.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-co

[jira] [Updated] (HDFS-1623) High Availability Framework for HDFS NN

2012-03-11 Thread Suresh Srinivas (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-1623:
--

Target Version/s: 0.24.0, 0.23.3  (was: 0.24.0)
   Fix Version/s: 0.23.3

> High Availability Framework for HDFS NN
> ---
>
> Key: HDFS-1623
> URL: https://issues.apache.org/jira/browse/HDFS-1623
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Sanjay Radia
> Fix For: 0.24.0, 0.23.3
>
> Attachments: HA-tests.pdf, HDFS-1623.rel23.patch, 
> HDFS-1623.trunk.patch, HDFS-High-Availability.pdf, NameNode HA_v2.pdf, 
> NameNode HA_v2_1.pdf, Namenode HA Framework.pdf, dfsio-results.tsv, 
> ha-testplan.pdf, ha-testplan.tex
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3005) ConcurrentModificationException in FSDataset$FSVolume.getDfsUsed(..)

2012-03-11 Thread VINAYAKUMAR B (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

VINAYAKUMAR B updated HDFS-3005:


Attachment: HDFS-3005.patch

Attaching the latest patch.

> ConcurrentModificationException in FSDataset$FSVolume.getDfsUsed(..)
> 
>
> Key: HDFS-3005
> URL: https://issues.apache.org/jira/browse/HDFS-3005
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.24.0
>Reporter: Tsz Wo (Nicholas), SZE
> Attachments: HDFS-3005.patch
>
>
> Saw this in [build 
> #1888|https://builds.apache.org/job/PreCommit-HDFS-Build/1888//testReport/org.apache.hadoop.hdfs.server.datanode/TestMulitipleNNDataBlockScanner/testBlockScannerAfterRestart/].
> {noformat}
> java.util.ConcurrentModificationException
>   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:834)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:832)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolume.getDfsUsed(FSDataset.java:557)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolumeSet.getDfsUsed(FSDataset.java:809)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolumeSet.access$1400(FSDataset.java:774)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset.getDfsUsed(FSDataset.java:1124)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.sendHeartBeat(BPOfferService.java:406)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.offerService(BPOfferService.java:490)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.run(BPOfferService.java:635)
>   at java.lang.Thread.run(Thread.java:662)
> {noformat}
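The trace above is the classic fail-fast failure: `FSVolume.getDfsUsed` iterates a `HashMap` while the heartbeat path mutates it from another thread. The behavior can be reproduced outside HDFS with a plain `HashMap`; the sketch below is illustrative only (the class, method, and path names are hypothetical, not taken from `FSDataset`), and the "safe" variant shows one common mitigation, iterating over a snapshot taken under a lock, rather than the fix actually chosen in the attached patch.

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class CmeDemo {

    // Sums usage while iterating the live map; a mutation mid-loop
    // (here simulated inline, in HDFS done by another thread) makes
    // the fail-fast iterator throw ConcurrentModificationException.
    static long sumUnsafe(Map<String, Long> usage) {
        long total = 0;
        for (long v : usage.values()) {
            total += v;
            usage.put("/extra", 1L);  // simulates a concurrent volume add
        }
        return total;
    }

    // One common fix: copy the map under a lock, then iterate the
    // snapshot, so concurrent writers never race the iterator.
    static long sumSafe(Map<String, Long> usage) {
        Map<String, Long> snapshot;
        synchronized (usage) {
            snapshot = new HashMap<>(usage);
        }
        long total = 0;
        for (long v : snapshot.values()) {
            total += v;
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, Long> usage = new HashMap<>();
        usage.put("/data1", 100L);
        usage.put("/data2", 200L);

        boolean sawCme = false;
        try {
            sumUnsafe(usage);
        } catch (ConcurrentModificationException ex) {
            sawCme = true;
        }
        System.out.println(sawCme);          // prints: true
        System.out.println(sumSafe(usage));
    }
}
```

Alternatives with the same effect would be guarding every reader and writer of the volume map with a shared lock, or switching to `java.util.concurrent.ConcurrentHashMap`, whose iterators are weakly consistent and never throw `ConcurrentModificationException`.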





[jira] [Updated] (HDFS-3005) ConcurrentModificationException in FSDataset$FSVolume.getDfsUsed(..)

2012-03-11 Thread VINAYAKUMAR B (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

VINAYAKUMAR B updated HDFS-3005:


Attachment: (was: HDFS-3005.patch)

> ConcurrentModificationException in FSDataset$FSVolume.getDfsUsed(..)
> 
>
> Key: HDFS-3005
> URL: https://issues.apache.org/jira/browse/HDFS-3005
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: data-node
>Affects Versions: 0.24.0
>Reporter: Tsz Wo (Nicholas), SZE
>
> Saw this in [build 
> #1888|https://builds.apache.org/job/PreCommit-HDFS-Build/1888//testReport/org.apache.hadoop.hdfs.server.datanode/TestMulitipleNNDataBlockScanner/testBlockScannerAfterRestart/].
> {noformat}
> java.util.ConcurrentModificationException
>   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:834)
>   at java.util.HashMap$EntryIterator.next(HashMap.java:832)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolume.getDfsUsed(FSDataset.java:557)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolumeSet.getDfsUsed(FSDataset.java:809)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolumeSet.access$1400(FSDataset.java:774)
>   at 
> org.apache.hadoop.hdfs.server.datanode.FSDataset.getDfsUsed(FSDataset.java:1124)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.sendHeartBeat(BPOfferService.java:406)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.offerService(BPOfferService.java:490)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.run(BPOfferService.java:635)
>   at java.lang.Thread.run(Thread.java:662)
> {noformat}
