[jira] [Commented] (HDFS-3004) Implement Recovery Mode

2012-03-28 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240215#comment-13240215
 ] 

Todd Lipcon commented on HDFS-3004:
---

{code}
+} catch (IOException e) {
+  return null;
{code}
This shows up a few places and makes me nervous. We should always at least 
LOG.warn an exception.


{code}
+if (op.getTransactionId() > expectedTxId) { 
+  askOperator("There appears to be a gap in the edit log.  " +
+      "We expected txid " + expectedTxId + ", but got txid " +
+      op.getTransactionId() + ".", recovery, "skipping bad edit");
+} else if (op.getTransactionId() < expectedTxId) { 
+  askOperator("There appears to be an out-of-order edit in " +
+      "the edit log.  We expected txid " + expectedTxId +
+      ", but got txid " + op.getTransactionId() + ".", recovery,
+      "applying edits");
+  continue;
{code}
Am I reading this wrong, or is the logic backwards? It seems like the first 
prompt allows you to say "continue, skipping bad edit", but doesn't call 
continue (thus applying the bad edit). The second one gives you the choice 
"continue, applying edits" but *does* skip the bad one.
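To make the concern concrete, here is a small self-contained sketch (hypothetical names, not the actual patch code) of the arrangement where the chosen action actually matches the prompt text:

```java
// Illustration of the point above: the branch whose prompt says
// "skipping bad edit" should be the one that skips, and the branch whose
// prompt says "applying edits" should fall through and apply.
// EditActionSketch and resolve() are stand-ins invented for this sketch.
public class EditActionSketch {
  enum Action { APPLY, SKIP }

  /** One consistent arrangement: the action matches the prompt text. */
  static Action resolve(long actualTxId, long expectedTxId) {
    if (actualTxId > expectedTxId) {
      // Gap in the log; prompt text "skipping bad edit" -> actually skip.
      return Action.SKIP;
    } else if (actualTxId < expectedTxId) {
      // Out-of-order edit; prompt text "applying edits" -> actually apply.
      return Action.APPLY;
    }
    return Action.APPLY;  // txid is exactly what we expected
  }
}
```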


{code}
+  catch (Throwable e) {
+    askOperator("Failed to apply edit log operation " +
+        op.getTransactionIdStr() + ": error " + e.getMessage(),
+        recovery, "applying edits");
{code}
Here we no longer output the stack trace of the exception anywhere. That's 
often a big problem - we really need the exception trace to understand the root 
cause and to know whether we might be able to skip it.
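For reference, a minimal stand-alone way to capture the full trace as a string (Hadoop has StringUtils.stringifyException for exactly this; the JDK-only version below is just an illustration, not the HDFS code):

```java
// Sketch of preserving the stack trace when prompting the operator.
// Uses only the JDK: print the trace into a StringWriter and include the
// resulting string in the message shown to the operator.
import java.io.PrintWriter;
import java.io.StringWriter;

public class TraceSketch {
  static String stringifyException(Throwable e) {
    StringWriter sw = new StringWriter();
    e.printStackTrace(new PrintWriter(sw, true));
    return sw.toString();  // full trace, including the cause chain
  }
}
```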

- I would also recommend printing the op.toString() itself in all of these 
cases, not just the txids. That gives the operator something to base their 
decision on. I think we have a reasonable stringification of all ops nowadays.


{code}
-  LOG.info(String.format("Log begins at txid %d, but requested start "
-      + "txid is %d. Skipping %d edits.", elf.getFirstTxId(), fromTxId,
-      transactionsToSkip));
-  elfis.skipTransactions(transactionsToSkip);
+// TODO: add recovery mode override here as part of edit log failover
+if (elfis.skipUntil(fromTxId) == false) {
{code}
Please leave the log message in place here. Also, for the failure case, I 
think you should (a) throw EditLogInputException, and (b) include the file 
path in the message.



{code}
+  /* Now that the operation has been successfully decoded and
+   * applied, update our bookkeeping. */
{code}
style: use // comments inline in code, not /* */ generally. There are a couple 
other places where you should make the same fix.


- There are still a few spurious whitespace changes unrelated to your code 
here. Not a big deal, but best to remove those hunks from your patch.



{code}
+   * Get the next valid operation from the stream storage
+   * 
+   * This is exactly like nextOp, except that we attempt to skip over damaged
+   * parts of the edit log
{code}
Please add '.'s at the end of the sentences in the docs. It's annoying, but the 
way JavaDoc gets formatted, all the sentences will run together without line 
breaks in many cases, so the punctuation's actually important for readability.


{code}
+   * @return   Returns true if we found a transaction ID greater than
+   *           'txid' in the log.
+   */
+  public boolean skipUntil(long txid) throws IOException {
{code}
The javadoc should say greater than or equal to, right? Also, I think the doc 
should specify explicitly: the next call to readOp() will usually return the 
transaction with the specified txid, unless the log contains a gap, in which 
case it may return a higher one.
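A toy model of that contract (the array stands in for the edit log stream; the names here are hypothetical, not the HDFS API):

```java
// Illustrates the skipUntil contract described above: scan forward until a
// txid >= the requested one; with a gap in the log, the reader lands on a
// higher txid than requested.
public class SkipUntilSketch {
  /** Returns the txid the next read would see, or -1 if none >= target. */
  static long skipUntil(long[] txids, long target) {
    for (long t : txids) {
      if (t >= target) {
        return t;  // usually == target, higher if the log has a gap
      }
    }
    return -1;  // nothing left: the real skipUntil would return false
  }
}
```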


{code}
+  private ThreadLocal <OpInstanceCache> cache = new ThreadLocal<OpInstanceCache>() {
{code}
Style: no space between ThreadLocal and '<'. Also the line's a bit long (we 
try to keep to 80 characters unless it's really hard not to).


{code}
+if (op == null)
   break;
{code}
Style: we use {} braces even for one-line if statements. There are a few other 
examples of this too.

{code}
+  }
+  catch (Throwable e) {
{code}
catch clause goes on same line as '}'


{code}
+if (txid == HdfsConstants.INVALID_TXID) {
+  throw new RuntimeException("operation has no txid");
{code}
Use {{Preconditions.checkState()}} here for better readability
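For illustration, Guava's Preconditions.checkState behaves like this minimal stand-in (written JDK-only so the snippet compiles without the Guava jar; in the patch you would import com.google.common.base.Preconditions instead):

```java
// Minimal stand-in mirroring Guava's Preconditions.checkState: throws
// IllegalStateException with the given message when the condition is false.
public class PreconditionsSketch {
  static void checkState(boolean expression, Object errorMessage) {
    if (!expression) {
      throw new IllegalStateException(String.valueOf(errorMessage));
    }
  }
  // Usage in place of the quoted throw:
  //   Preconditions.checkState(txid != HdfsConstants.INVALID_TXID,
  //       "operation has no txid");
}
```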


{code}
+LOG.error("automatically choosing " + firstChoice);
{code}
this should probably be INFO or WARN


Style: please add a blank line between functions in RecoveryContext.java


TestNameNodeRecovery:
- do we need data nodes in these tests? if it's just metadata ops, 0 DNs should 
be fine
- move the EditLogTestSetup interface down lower in the file, just above where 
you 

[jira] [Commented] (HDFS-3143) TestGetBlocks.testGetBlocks is failing

2012-03-28 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240219#comment-13240219
 ] 

Aaron T. Myers commented on HDFS-3143:
--

I'm sure that this patch will work to get the test passing, but I'm not sure 
that it's the most appropriate solution. Is it not possible to fix HADOOP-8184 
so that the exception messages returned have the same data that they used to?

 TestGetBlocks.testGetBlocks is failing
 --

 Key: HDFS-3143
 URL: https://issues.apache.org/jira/browse/HDFS-3143
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Eli Collins
Assignee: Arpit Gupta
 Attachments: HDFS-3143.patch


 TestGetBlocks.testGetBlocks is failing in the latest trunk/23 builds. Last 
 good build was Mar 23rd.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3143) TestGetBlocks.testGetBlocks is failing

2012-03-28 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240223#comment-13240223
 ] 

Todd Lipcon commented on HDFS-3143:
---

IMO it depends. The patch that caused this failure didn't make any note of 
semantic changes, so it seems a little strange to just change the test with no 
comment. But if this is in fact an improvement, it seems reasonable to fix the 
test. Could someone comment on what semantics actually changed? We used to 
prepend the exception class as part of the message of RemoteException, but now 
we don't?

 TestGetBlocks.testGetBlocks is failing
 --

 Key: HDFS-3143
 URL: https://issues.apache.org/jira/browse/HDFS-3143
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Eli Collins
Assignee: Arpit Gupta
 Attachments: HDFS-3143.patch


 TestGetBlocks.testGetBlocks is failing in the latest trunk/23 builds. Last 
 good build was Mar 23rd.





[jira] [Commented] (HDFS-3143) TestGetBlocks.testGetBlocks is failing

2012-03-28 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240226#comment-13240226
 ] 

Aaron T. Myers commented on HDFS-3143:
--

Great point, Todd. I'm also not entirely clear if the exception message change 
in HADOOP-8184 was an unintended side effect, or one of the intended effects of 
the JIRA.

There is perhaps a question of compatibility here as well, since HADOOP-8184 
changed the output of the `hadoop fs ...' CLI in the event of errors, as 
HDFS-3142 demonstrates. Should we be at all concerned about that? It doesn't 
seem like a big deal to me, but I'd like to hear others' opinions.

 TestGetBlocks.testGetBlocks is failing
 --

 Key: HDFS-3143
 URL: https://issues.apache.org/jira/browse/HDFS-3143
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Eli Collins
Assignee: Arpit Gupta
 Attachments: HDFS-3143.patch


 TestGetBlocks.testGetBlocks is failing in the latest trunk/23 builds. Last 
 good build was Mar 23rd.





[jira] [Created] (HDFS-3156) TestDFSHAAdmin is failing post HADOOP-8202

2012-03-28 Thread Aaron T. Myers (Created) (JIRA)
TestDFSHAAdmin is failing post HADOOP-8202
--

 Key: HDFS-3156
 URL: https://issues.apache.org/jira/browse/HDFS-3156
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.24.0, 0.23.3
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers


TestDFSHAAdmin mocks a protocol object without implementing Closeable, which is 
now required.





[jira] [Commented] (HDFS-3156) TestDFSHAAdmin is failing post HADOOP-8202

2012-03-28 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240254#comment-13240254
 ] 

Todd Lipcon commented on HDFS-3156:
---

+1 pending hudson

 TestDFSHAAdmin is failing post HADOOP-8202
 --

 Key: HDFS-3156
 URL: https://issues.apache.org/jira/browse/HDFS-3156
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.24.0, 0.23.3
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-3156.patch


 TestDFSHAAdmin mocks a protocol object without implementing Closeable, which 
 is now required.





[jira] [Updated] (HDFS-3156) TestDFSHAAdmin is failing post HADOOP-8202

2012-03-28 Thread Aaron T. Myers (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3156:
-

Attachment: HDFS-3156.patch

Here's a patch which addresses the issue.

 TestDFSHAAdmin is failing post HADOOP-8202
 --

 Key: HDFS-3156
 URL: https://issues.apache.org/jira/browse/HDFS-3156
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.24.0, 0.23.3
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-3156.patch


 TestDFSHAAdmin mocks a protocol object without implementing Closeable, which 
 is now required.





[jira] [Updated] (HDFS-3156) TestDFSHAAdmin is failing post HADOOP-8202

2012-03-28 Thread Todd Lipcon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HDFS-3156:
--

Target Version/s: 0.24.0, 0.23.3  (was: 0.23.3, 0.24.0)
Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

 TestDFSHAAdmin is failing post HADOOP-8202
 --

 Key: HDFS-3156
 URL: https://issues.apache.org/jira/browse/HDFS-3156
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.24.0, 0.23.3
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-3156.patch


 TestDFSHAAdmin mocks a protocol object without implementing Closeable, which 
 is now required.





[jira] [Commented] (HDFS-3156) TestDFSHAAdmin is failing post HADOOP-8202

2012-03-28 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240265#comment-13240265
 ] 

Hadoop QA commented on HDFS-3156:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520239/HDFS-3156.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

-1 javac.  The patch appears to cause tar ant target to fail.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to cause Findbugs (version 1.3.9) to fail.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed the unit tests build

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2108//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2108//console

This message is automatically generated.

 TestDFSHAAdmin is failing post HADOOP-8202
 --

 Key: HDFS-3156
 URL: https://issues.apache.org/jira/browse/HDFS-3156
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.24.0, 0.23.3
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-3156.patch


 TestDFSHAAdmin mocks a protocol object without implementing Closeable, which 
 is now required.





[jira] [Commented] (HDFS-3156) TestDFSHAAdmin is failing post HADOOP-8202

2012-03-28 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240278#comment-13240278
 ] 

Aaron T. Myers commented on HDFS-3156:
--

Hmmm, for some reason the patch failed to compile because MockitoUtil couldn't 
be found, even though HADOOP-8218 has already been committed. Perhaps the 
Jenkins slaves run against an svn mirror that hasn't been updated yet? I'll 
kick another Jenkins build tomorrow to give it another shot.

 TestDFSHAAdmin is failing post HADOOP-8202
 --

 Key: HDFS-3156
 URL: https://issues.apache.org/jira/browse/HDFS-3156
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.24.0, 0.23.3
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-3156.patch


 TestDFSHAAdmin mocks a protocol object without implementing Closeable, which 
 is now required.





[jira] [Updated] (HDFS-2287) TestParallelRead has a small off-by-one bug

2012-03-28 Thread Arun C Murthy (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HDFS-2287:


Fix Version/s: (was: 0.24.0)

 TestParallelRead has a small off-by-one bug
 ---

 Key: HDFS-2287
 URL: https://issues.apache.org/jira/browse/HDFS-2287
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Trivial
 Fix For: 0.22.0

 Attachments: hdfs-2287-v-0.22.patch, hdfs-2287.txt


 Noticed this bug when I was running TestParallelRead - a simple off-by-one 
 error in some internal bounds checking.





[jira] [Updated] (HDFS-3131) Improve TestStorageRestore

2012-03-28 Thread Arun C Murthy (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HDFS-3131:


Target Version/s: 0.24.0, 1.1.0  (was: 1.1.0, 0.24.0)
   Fix Version/s: (was: 0.24.0)

 Improve TestStorageRestore
 --

 Key: HDFS-3131
 URL: https://issues.apache.org/jira/browse/HDFS-3131
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 0.24.0, 1.1.0
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Brandon Li
Priority: Minor
  Labels: newbie
 Fix For: 1.1.0

 Attachments: HDFS-3131.branch-1.patch, HDFS-3131.patch, 
 HDFS-3131.patch


 Aaron has the following comments on TestStorageRestore in HDFS-3127.
 # removeStorageAccess, restoreAccess, and numStorageDirs can all be made 
 private
 # numStorageDirs can be made static
 # Rather than do set(Readable/Executable/Writable), use FileUtil.chmod(...).
 # Please put the contents of the test in a try/finally, with the calls to 
 shutdown the cluster and the 2NN in the finally block.
 # Some lines are over 80 chars.
 # No need for the numDatanodes variable - it's only used in one place.
 # Instead of xwr use rwx, which I think is a more common way of 
 describing permissions.





[jira] [Updated] (HDFS-3101) cannot read empty file using webhdfs

2012-03-28 Thread Arun C Murthy (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HDFS-3101:


Fix Version/s: (was: 0.23.3)
   2.0.0

 cannot read empty file using webhdfs
 

 Key: HDFS-3101
 URL: https://issues.apache.org/jira/browse/HDFS-3101
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs client
Affects Versions: 0.23.1
Reporter: Zhanwei.Wang
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 0.23.2, 1.0.2, 2.0.0

 Attachments: h3101_20120315.patch, h3101_20120315_branch-1.patch


 STEP:
 1, create a new EMPTY file
 2, read it using webhdfs.
 RESULT:
 expected: get an empty file
 I got: 
 {"RemoteException":{"exception":"IOException","javaClassName":"java.io.IOException","message":"Offset=0
  out of the range [0, 0); OPEN, path=/testFile"}}
 First of all, [0, 0) is not a valid range, and I think reading an empty file 
 should be OK.





[jira] [Updated] (HDFS-3129) NetworkTopology: add test that getLeaf should check for invalid topologies

2012-03-28 Thread Arun C Murthy (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HDFS-3129:


Fix Version/s: (was: 0.23.3)
   2.0.0

 NetworkTopology: add test that getLeaf should check for invalid topologies
 --

 Key: HDFS-3129
 URL: https://issues.apache.org/jira/browse/HDFS-3129
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 1.1.0, 2.0.0

 Attachments: HDFS-3129-b1.001.patch, HDFS-3129.001.patch








[jira] [Updated] (HDFS-2978) The NameNode should expose name dir statuses via JMX

2012-03-28 Thread Arun C Murthy (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HDFS-2978:


Target Version/s: 1.0.2, 0.23.3  (was: 0.23.3, 1.0.2)
   Fix Version/s: (was: 0.23.3)
  2.0.0

 The NameNode should expose name dir statuses via JMX
 

 Key: HDFS-2978
 URL: https://issues.apache.org/jira/browse/HDFS-2978
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: name-node
Affects Versions: 0.23.0, 1.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 1.0.2, 2.0.0

 Attachments: HDFS-2978-branch-1.patch, HDFS-2978.patch, 
 HDFS-2978.patch


 We currently display this info on the NN web UI, so users who wish to monitor 
 this must either do it manually or parse HTML. We should publish this 
 information via JMX.





[jira] [Commented] (HDFS-1218) 20 append: Blocks recovered on startup should be treated with lower priority during block synchronization

2012-03-28 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240355#comment-13240355
 ] 

Uma Maheswara Rao G commented on HDFS-1218:
---

{code}
 if (!shouldRecoverRwrs && info.wasRecoveredOnStartup()) {
+  LOG.info("Not recovering replica " + record + " since it was " +
+      "recovered on startup and we have better replicas");
+  continue;
+}
{code}

@Todd, can you please explain why this check was added?

Take this case:

 1) DN1-DN2-DN3 are in pipeline.
 2) Client killed abruptly
 3) one DN has restarted , say DN3
 4) In DN3 info.wasRecoveredOnStartup() will be true
 5) NN recovery triggered, DN3 skipped from recovery due to above check.
 6) Now DN1 and DN2 have blocks with generation stamp 2, DN3 has an older 
generation stamp, say 1, and DN3 still has this block entry in ongoingCreates
 7) As part of recovery the file was closed with only two live replicas (from 
DN1 and DN2)
 8) So, NN issued the command for replication. Now DN3 also has the replica 
with the newer generation stamp.
 9) Now DN3 contains 2 replicas on disk, and one entry in ongoingCreates 
referring to the blocksBeingWritten directory.
 
When we call append/leaseRecovery, it may again skip this node for that 
recovery, since the blockId entry is still present in ongoingCreates with the 
startup-recovery flag true. It may keep repeating this dance for every 
recovery, and the stale replica will not be cleaned up until we restart the 
cluster. The actual replica will be transferred to this node only through the 
replication process.

Also, those replicated blocks will unnecessarily get invalidated after 
subsequent recoveries.

I understand the check might be there to exclude the restarted node when 
calculating the minimum length to truncate.
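A toy model of the loop being described (entirely hypothetical names, just to illustrate that a flag which is never cleared excludes the node from every subsequent recovery):

```java
// Models the scenario above: if a replica is excluded from recovery whenever
// wasRecoveredOnStartup() is true and that flag is never cleared, the node
// is skipped in every recovery attempt, so the stale replica is never fixed.
public class RecoverySkipSketch {
  /** Counts how many of 'attempts' recoveries would include the node. */
  static int recoveriesIncludingNode(boolean recoveredOnStartup, int attempts) {
    int included = 0;
    for (int i = 0; i < attempts; i++) {
      if (recoveredOnStartup) {
        continue;  // DN3 skipped again: the flag was never cleared
      }
      included++;
    }
    return included;
  }
}
```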


 20 append: Blocks recovered on startup should be treated with lower priority 
 during block synchronization
 -

 Key: HDFS-1218
 URL: https://issues.apache.org/jira/browse/HDFS-1218
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.20-append
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Critical
 Fix For: 0.20.205.0

 Attachments: HDFS-1218.20s.2.patch, hdfs-1281.txt


 When a datanode experiences power loss, it can come back up with truncated 
 replicas (due to local FS journal replay). Those replicas should not be 
 allowed to truncate the block during block synchronization if there are other 
 replicas from DNs that have _not_ restarted.





[jira] [Created] (HDFS-3157) Error in deleting block is keep on coming from DN even after the block report and directory scanning has happened

2012-03-28 Thread J.Andreina (Created) (JIRA)
Error in deleting block is keep on coming from DN even after the block report 
and directory scanning has happened
-

 Key: HDFS-3157
 URL: https://issues.apache.org/jira/browse/HDFS-3157
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0, 0.24.0
Reporter: J.Andreina
 Fix For: 0.24.0


Cluster setup:

1NN,Three DN(DN1,DN2,DN3),replication factor-2,dfs.blockreport.intervalMsec 
300,dfs.datanode.directoryscan.interval 1

step 1: write one file a.txt with sync(not closed)
step 2: Delete the blocks in one of the datanode say DN1(from rbw) to which 
replication happened.
step 3: close the file.

Since the replication factor is 2 the blocks are replicated to the other 
datanode.

Then at the NN side the following cmd is issued to DN from which the block is 
deleted
-
{noformat}
2012-03-19 13:41:36,905 INFO org.apache.hadoop.hdfs.StateChange: BLOCK 
NameSystem.addToCorruptReplicasMap: duplicate requested for 
blk_2903555284838653156 to add as corrupt on XX.XX.XX.XX by /XX.XX.XX.XX 
because reported RBW replica with genstamp 1002 does not match COMPLETE block's 
genstamp in block map 1003
2012-03-19 13:41:39,588 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
Removing block blk_2903555284838653156_1003 from neededReplications as it has 
enough replicas.
{noformat}

From the datanode side in which the block is deleted, the following exception 
occurred


{noformat}
2012-02-29 13:54:13,126 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
Unexpected error trying to delete block blk_2903555284838653156_1003. BlockInfo 
not found in volumeMap.
2012-02-29 13:54:13,126 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
Error processing datanode Command
java.io.IOException: Error in deleting blocks.
at 
org.apache.hadoop.hdfs.server.datanode.FSDataset.invalidate(FSDataset.java:2061)
at 
org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:581)
at 
org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:545)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:690)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:522)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:662)
at java.lang.Thread.run(Thread.java:619)
{noformat}





[jira] [Updated] (HDFS-1026) Quota checks fail for small files and quotas

2012-03-28 Thread Hızır Sefa İrken (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hızır Sefa İrken updated HDFS-1026:
---

Attachment: HDFS-1026.pacth

From what I read, I prepared and attached a patch.

 Quota checks fail for small files and quotas
 

 Key: HDFS-1026
 URL: https://issues.apache.org/jira/browse/HDFS-1026
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation, name-node
Affects Versions: 0.20.1, 0.20.2, 0.20.3, 0.21.0, 0.22.0
Reporter: Eli Collins
  Labels: newbie
 Attachments: HDFS-1026.pacth


 If a directory has a quota less than blockSize * numReplicas then you can't 
 add a file to it, even if the file size is less than the quota. This is 
 because FSDirectory#addBlock updates the count assuming at least one block is 
 written in full. We don't know how much of the block will be written when 
 addBlock is called and supporting such small quotas is not important so 
 perhaps we should document this and log an error message instead of making 
 small (blockSize * numReplicas) quotas work.
 {code}
 // check quota limits and updated space consumed
 updateCount(inodes, inodes.length-1, 0, 
 fileINode.getPreferredBlockSize()*fileINode.getReplication(), true);
 {code}
 You can reproduce with the following commands:
 {code}
 $ dd if=/dev/zero of=temp bs=1000 count=64
 $ hadoop fs -mkdir /user/eli/dir
 $ hdfs dfsadmin -setSpaceQuota 191M /user/eli/dir
 $ hadoop fs -put temp /user/eli/dir  # Causes DSQuotaExceededException
 {code}





[jira] [Commented] (HDFS-3157) Error in deleting block is keep on coming from DN even after the block report and directory scanning has happened

2012-03-28 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240476#comment-13240476
 ] 

Uma Maheswara Rao G commented on HDFS-3157:
---

Hi Andreina,

 It would be good if we keep the description field short and add further 
details as comments. This can avoid generating big emails for every update on 
this issue.


Thanks
Uma

 Error in deleting block is keep on coming from DN even after the block report 
 and directory scanning has happened
 -

 Key: HDFS-3157
 URL: https://issues.apache.org/jira/browse/HDFS-3157
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.0, 0.24.0
Reporter: J.Andreina
 Fix For: 0.24.0


 Cluster setup:
 1NN,Three DN(DN1,DN2,DN3),replication factor-2,dfs.blockreport.intervalMsec 
 300,dfs.datanode.directoryscan.interval 1
 step 1: write one file a.txt with sync(not closed)
 step 2: Delete the blocks in one of the datanode say DN1(from rbw) to which 
 replication happened.
 step 3: close the file.
 Since the replication factor is 2 the blocks are replicated to the other 
 datanode.
 Then at the NN side the following cmd is issued to DN from which the block is 
 deleted
 -
 {noformat}
 2012-03-19 13:41:36,905 INFO org.apache.hadoop.hdfs.StateChange: BLOCK 
 NameSystem.addToCorruptReplicasMap: duplicate requested for 
 blk_2903555284838653156 to add as corrupt on XX.XX.XX.XX by /XX.XX.XX.XX 
 because reported RBW replica with genstamp 1002 does not match COMPLETE 
 block's genstamp in block map 1003
 2012-03-19 13:41:39,588 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
 Removing block blk_2903555284838653156_1003 from neededReplications as it has 
 enough replicas.
 {noformat}
 From the datanode side in which the block is deleted the following exception 
 occured
 {noformat}
 2012-02-29 13:54:13,126 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Unexpected error trying to delete block blk_2903555284838653156_1003. 
 BlockInfo not found in volumeMap.
 2012-02-29 13:54:13,126 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Error processing datanode Command
 java.io.IOException: Error in deleting blocks.
   at 
 org.apache.hadoop.hdfs.server.datanode.FSDataset.invalidate(FSDataset.java:2061)
   at 
 org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:581)
   at 
 org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:545)
   at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:690)
   at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:522)
   at 
 org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:662)
   at java.lang.Thread.run(Thread.java:619)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-2839) HA: Remove need for client configuration of nameservice ID

2012-03-28 Thread Daryn Sharp (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240500#comment-13240500
 ] 

Daryn Sharp commented on HDFS-2839:
---

Comments.  They'll reference and span both solutions A & B to hopefully provide 
clarity.

* NN has a logical name that is authority in URI (hdfs://logicalName/path)
* (Requirement B.5) The non-HA NN’s DNS name is the logical name of NN.

I think this may be your intent, so would you please clarify/confirm?  The 
logical name (URI's authority) for the NN should always be, for both HA and 
non-HA, a DNS name with a primary mapping to the main NN.  The logical name may 
be a CNAME (see below) in the case of HA.  This will allow both HA and non-HA 
aware clients to access the cluster.  The non-HA aware clients just won't 
failover. 

* The LogicalName is the DNS name that maps to a single IP which fails over
* DNS-Resolver - DNS-Name to IP or IPs (single IP in case of IP-Failover)

These two statements seem contradictory about whether the logical name (URI 
authority) has a DNS mapping to one or many hosts.  I'm assuming that the 
single-IP approach would rely on config settings for failover support?

Here's how I would envision a dns configuration to support both HA and non-HA:
* A record for logical name: nn.domain -> IP
** Non-HA client works as it does now.
** HA client has a single resolution so there's no failover.
* CNAME for logical name: nn.domain -> nn1.domain & nn2.domain
** The non-HA client works as it does now.  The CNAME is resolved to the 
primary address (nn1.domain).  No code change required.
** The HA client can specifically query for all resolutions to build the 
failover list.

In fact, the HA aware client could be smart and instantiate the HA RPC proxy 
only if the logical name has multiple resolutions.  The HA aware client 
resolves the logical name with {{getAllByName}}, instead of {{getByName}}, to 
find the multiple mappings for HA.
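
The getAllByName idea above could be sketched as follows. This is a hedged illustration, not actual HDFS client code: the class and method names are hypothetical, and only the use of {{java.net.InetAddress.getAllByName}} (which returns every address a name maps to, versus {{getByName}} returning just one) comes from the comment above.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Hypothetical sketch: decide whether to instantiate an HA failover proxy
// based on how many addresses the logical name resolves to.
public class LogicalNameResolver {
  /** Returns true if the logical name resolves to more than one address. */
  public static boolean isHaLogicalName(String logicalName)
      throws UnknownHostException {
    // getAllByName returns every address the name maps to;
    // getByName would return only the primary one.
    InetAddress[] addrs = InetAddress.getAllByName(logicalName);
    return addrs.length > 1;
  }
}
```

A non-HA-aware client would never call this and would simply use the primary resolution, as described above.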

Regarding cross cluster access:
* Cross cluster access must be supported
* Cross cluster – The logical to IP mapping must be available across clusters
* ConfigFile-resolver - the mapping in the config file - this config file will 
need to be available in all clusters, for all clusters to allow cross 
cluster access.

I'm uneasy about propagating the current model where clients require a lot of 
config info about remote clusters.  It becomes a maintenance burden to keep 
them in sync, more so when some users have their own configs.  Favoring the 
DNS/resolver approach should minimize the need to sync all cluster configs for 
HA.

 HA: Remove need for client configuration of nameservice ID
 --

 Key: HDFS-2839
 URL: https://issues.apache.org/jira/browse/HDFS-2839
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: ha, hdfs client, name-node
Affects Versions: 0.24.0
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey

 The fully qualified path from an ha cluster, won't be usable from a different 
 cluster that doesn't know about that particular namespace id.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3134) harden edit log loader against malformed or malicious input

2012-03-28 Thread Colin Patrick McCabe (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240565#comment-13240565
 ] 

Colin Patrick McCabe commented on HDFS-3134:


Hi Suresh,

I'm sorry if my description was unclear.  I am not talking about blindly 
translating unchecked exceptions into something else.  I'm talking about fixing 
the code so it doesn't generate those unchecked exceptions in the first place. 

Hope this helps.
Colin

 harden edit log loader against malformed or malicious input
 ---

 Key: HDFS-3134
 URL: https://issues.apache.org/jira/browse/HDFS-3134
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe

 Currently, the edit log loader does not handle bad or malicious input 
 sensibly.
 We can often cause OutOfMemory exceptions, null pointer exceptions, or other 
 unchecked exceptions to be thrown by feeding the edit log loader bad input.  
 In some environments, an out of memory error can cause the JVM process to be 
 terminated.
 It's clear that we want these exceptions to be thrown as IOException instead 
 of as unchecked exceptions.  We also want to avoid out of memory situations.
 The main task here is to put a sensible upper limit on the lengths of arrays 
 and strings we allocate on command.  The other task is to try to avoid 
 creating unchecked exceptions (by dereferencing potentially-NULL pointers, 
 for example).  Instead, we should verify ahead of time and give a more 
 sensible error message that reflects the problem with the input.
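
 The bounded-allocation idea could look roughly like the following. This is a 
 hedged sketch, not the actual loader code: the limit, class, and method names 
 are assumptions; only the strategy (validate a length field before allocating, 
 and fail with an IOException rather than an OutOfMemoryError) comes from the 
 description above.

```java
import java.io.DataInputStream;
import java.io.IOException;

// Illustrative hardening sketch: check an on-disk length field against a
// sane upper bound before allocating, so malformed input yields IOException.
public class BoundedRead {
  static final int MAX_BYTES = 1 << 20; // 1 MB: assumed sane upper bound

  public static byte[] readLengthPrefixedBytes(DataInputStream in)
      throws IOException {
    int len = in.readInt();
    if (len < 0 || len > MAX_BYTES) {
      // Reject before allocating: a bad length never reaches 'new byte[len]'.
      throw new IOException("invalid length " + len + " in edit log input");
    }
    byte[] buf = new byte[len]; // safe: len is bounded
    in.readFully(buf);
    return buf;
  }
}
```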

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3004) Implement Recovery Mode

2012-03-28 Thread Colin Patrick McCabe (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240576#comment-13240576
 ] 

Colin Patrick McCabe commented on HDFS-3004:


Todd said:

bq. Am I reading this wrong, or is the logic backwards? It seems like the first 
prompt allows you to say "continue, skipping bad edit", but doesn't call 
continue (thus applying the bad edit). The second one gives you the choice 
"continue, applying edits" but *does* skip the bad one.

The logic isn't backwards, but the prompts could probably be phrased better.

Basically, we NEVER want to apply a transaction that has a lower or equal ID to 
the previous one.  That's why the 'continue' is there in the else clause.  We 
will try to recover from an edit log with gaps in it, though.  (That is sort of 
the point of recovery.)

I'll see if I can phrase the prompts better.
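The ordering rule described above can be reduced to a small decision: an edit whose txid is below the expected one is never applied, while a gap (txid above expected) may be accepted and replay continues from there. The following is a hedged sketch of that rule only; the class, enum, and method names are illustrative, not the actual recovery-mode code, and the operator prompt itself is omitted.

```java
// Minimal sketch of the txid-ordering rule for edit log replay in recovery.
public class TxidCheck {
  public enum Action { APPLY, SKIP }

  public static Action check(long expectedTxId, long opTxId) {
    if (opTxId < expectedTxId) {
      // Out-of-order or duplicate edit: never apply it (the 'continue' case).
      return Action.SKIP;
    }
    // opTxId == expectedTxId (normal case), or opTxId > expectedTxId (a gap
    // the operator may choose to accept): apply the edit.
    return Action.APPLY;
  }
}
```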

 Implement Recovery Mode
 ---

 Key: HDFS-3004
 URL: https://issues.apache.org/jira/browse/HDFS-3004
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: tools
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-3004.010.patch, HDFS-3004.011.patch, 
 HDFS-3004.012.patch, HDFS-3004.013.patch, HDFS-3004.015.patch, 
 HDFS-3004.016.patch, HDFS-3004.017.patch, HDFS-3004.018.patch, 
 HDFS-3004.019.patch, HDFS-3004.020.patch, HDFS-3004.022.patch, 
 HDFS-3004.023.patch, HDFS-3004.024.patch, HDFS-3004.026.patch, 
 HDFS-3004.027.patch, HDFS-3004.029.patch, HDFS-3004.030.patch, 
 HDFS-3004.031.patch, HDFS-3004.032.patch, HDFS-3004.033.patch, 
 HDFS-3004__namenode_recovery_tool.txt


 When the NameNode metadata is corrupt for some reason, we want to be able to 
 fix it.  Obviously, we would prefer never to get in this case.  In a perfect 
 world, we never would.  However, bad data on disk can happen from time to 
 time, because of hardware errors or misconfigurations.  In the past we have 
 had to correct it manually, which is time-consuming and which can result in 
 downtime.
 Recovery mode is initialized by the system administrator.  When the NameNode 
 starts up in Recovery Mode, it will try to load the FSImage file, apply all 
 the edits from the edits log, and then write out a new image.  Then it will 
 shut down.
 Unlike in the normal startup process, the recovery mode startup process will 
 be interactive.  When the NameNode finds something that is inconsistent, it 
 will prompt the operator as to what it should do.   The operator can also 
 choose to take the first option for all prompts by starting up with the '-f' 
 flag, or typing 'a' at one of the prompts.
 I have reused as much code as possible from the NameNode in this tool.  
 Hopefully, the effort that was spent developing this will also make the 
 NameNode editLog and image processing even more robust than it already is.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3156) TestDFSHAAdmin is failing post HADOOP-8202

2012-03-28 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240577#comment-13240577
 ] 

Hadoop QA commented on HDFS-3156:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520239/HDFS-3156.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.hdfs.TestGetBlocks
  org.apache.hadoop.cli.TestHDFSCLI

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2109//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2109//console

This message is automatically generated.

 TestDFSHAAdmin is failing post HADOOP-8202
 --

 Key: HDFS-3156
 URL: https://issues.apache.org/jira/browse/HDFS-3156
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.24.0, 0.23.3
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-3156.patch


 TestDFSHAAdmin mocks a protocol object without implementing Closeable, which 
 is now required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3156) TestDFSHAAdmin is failing post HADOOP-8202

2012-03-28 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240579#comment-13240579
 ] 

Aaron T. Myers commented on HDFS-3156:
--

Those two tests are known to be failing on trunk (HDFS-3142 and HDFS-3143). I'm 
going to commit this shortly.

 TestDFSHAAdmin is failing post HADOOP-8202
 --

 Key: HDFS-3156
 URL: https://issues.apache.org/jira/browse/HDFS-3156
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.23.3
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-3156.patch


 TestDFSHAAdmin mocks a protocol object without implementing Closeable, which 
 is now required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3156) TestDFSHAAdmin is failing post HADOOP-8202

2012-03-28 Thread Aaron T. Myers (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3156:
-

 Target Version/s: 0.23.3  (was: 0.23.3, 0.24.0)
Affects Version/s: (was: 0.24.0)

 TestDFSHAAdmin is failing post HADOOP-8202
 --

 Key: HDFS-3156
 URL: https://issues.apache.org/jira/browse/HDFS-3156
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.23.3
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-3156.patch


 TestDFSHAAdmin mocks a protocol object without implementing Closeable, which 
 is now required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3156) TestDFSHAAdmin is failing post HADOOP-8202

2012-03-28 Thread Aaron T. Myers (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3156:
-

 Target Version/s: 2.0.0  (was: 0.23.3)
Affects Version/s: (was: 0.23.3)
   2.0.0

 TestDFSHAAdmin is failing post HADOOP-8202
 --

 Key: HDFS-3156
 URL: https://issues.apache.org/jira/browse/HDFS-3156
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-3156.patch


 TestDFSHAAdmin mocks a protocol object without implementing Closeable, which 
 is now required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3156) TestDFSHAAdmin is failing post HADOOP-8202

2012-03-28 Thread Aaron T. Myers (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3156:
-

   Resolution: Fixed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

I've just committed this to trunk and branch-2. Thanks a lot for the quick 
review, Todd.

 TestDFSHAAdmin is failing post HADOOP-8202
 --

 Key: HDFS-3156
 URL: https://issues.apache.org/jira/browse/HDFS-3156
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.0

 Attachments: HDFS-3156.patch


 TestDFSHAAdmin mocks a protocol object without implementing Closeable, which 
 is now required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3156) TestDFSHAAdmin is failing post HADOOP-8202

2012-03-28 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240591#comment-13240591
 ] 

Hudson commented on HDFS-3156:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2014 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2014/])
HDFS-3156. TestDFSHAAdmin is failing post HADOOP-8202. Contributed by Aaron 
T. Myers. (Revision 1306517)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306517
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java


 TestDFSHAAdmin is failing post HADOOP-8202
 --

 Key: HDFS-3156
 URL: https://issues.apache.org/jira/browse/HDFS-3156
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.0

 Attachments: HDFS-3156.patch


 TestDFSHAAdmin mocks a protocol object without implementing Closeable, which 
 is now required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3156) TestDFSHAAdmin is failing post HADOOP-8202

2012-03-28 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240595#comment-13240595
 ] 

Hudson commented on HDFS-3156:
--

Integrated in Hadoop-Common-trunk-Commit #1939 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1939/])
HDFS-3156. TestDFSHAAdmin is failing post HADOOP-8202. Contributed by Aaron 
T. Myers. (Revision 1306517)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306517
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java


 TestDFSHAAdmin is failing post HADOOP-8202
 --

 Key: HDFS-3156
 URL: https://issues.apache.org/jira/browse/HDFS-3156
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.0

 Attachments: HDFS-3156.patch


 TestDFSHAAdmin mocks a protocol object without implementing Closeable, which 
 is now required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3143) TestGetBlocks.testGetBlocks is failing

2012-03-28 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240598#comment-13240598
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3143:
--

Hi Aaron and Todd,

The change probably was not intentional, but it is a good one, since the class 
name should not be in the message.  Exception messages are not APIs, so they 
can be changed at any time.  Also, tests should avoid depending on the message 
format.  However, CLI error messages are a kind of public API.  They should 
not be changed unless there are bugs.

Since the new CLI rewrite is not out yet and it is better not to have the 
class name in the message, I think we should keep the current message format 
and fix the tests.  What do you think?
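The guideline that tests should not depend on message format can be illustrated with a small sketch. This is a generic example, not the actual TestGetBlocks fix; the method under test is hypothetical, and the point shown is only that a test should assert on the exception's type rather than its full message text.

```java
import java.io.IOException;

// Illustrative sketch: a test helper that checks the exception *type*,
// so message wording can change without breaking the test.
public class ExceptionAssertSketch {
  // Hypothetical operation whose error message format may change.
  static void failingOperation() throws IOException {
    throw new IOException("SomeClass: detailed, format-may-change message");
  }

  public static boolean throwsIOException() {
    try {
      failingOperation();
    } catch (IOException e) {
      return true; // assert on the type, not on e.getMessage()
    }
    return false;
  }
}
```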

 TestGetBlocks.testGetBlocks is failing
 --

 Key: HDFS-3143
 URL: https://issues.apache.org/jira/browse/HDFS-3143
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Eli Collins
Assignee: Arpit Gupta
 Attachments: HDFS-3143.patch


 TestGetBlocks.testGetBlocks is failing in the latest trunk/23 builds. Last 
 good build was Mar 23rd.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3143) TestGetBlocks.testGetBlocks is failing

2012-03-28 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240603#comment-13240603
 ] 

Todd Lipcon commented on HDFS-3143:
---

Seems reasonable to me. We should mark HADOOP-8184 as an incompatible change 
and make sure this is noted in the release notes. Though they're not public 
APIs, there are cases in which I've seen software match against exception error 
messages -- mostly because we have a very flat exception hierarchy in Hadoop. 
For example, HBase's FSHDFSUtils class does this in a few places to distinguish 
different cases of lease expiration.

 TestGetBlocks.testGetBlocks is failing
 --

 Key: HDFS-3143
 URL: https://issues.apache.org/jira/browse/HDFS-3143
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Eli Collins
Assignee: Arpit Gupta
 Attachments: HDFS-3143.patch


 TestGetBlocks.testGetBlocks is failing in the latest trunk/23 builds. Last 
 good build was Mar 23rd.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3143) TestGetBlocks.testGetBlocks is failing

2012-03-28 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240606#comment-13240606
 ] 

Aaron T. Myers commented on HDFS-3143:
--

bq. Seems reasonable to me. We should mark HADOOP-8184 as an incompatible 
change and make sure this is noted in the release notes.

+1

 TestGetBlocks.testGetBlocks is failing
 --

 Key: HDFS-3143
 URL: https://issues.apache.org/jira/browse/HDFS-3143
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Eli Collins
Assignee: Arpit Gupta
 Attachments: HDFS-3143.patch


 TestGetBlocks.testGetBlocks is failing in the latest trunk/23 builds. Last 
 good build was Mar 23rd.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3143) TestGetBlocks.testGetBlocks is failing

2012-03-28 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240609#comment-13240609
 ] 

Aaron T. Myers commented on HDFS-3143:
--

Also, +1 to this patch. I'll commit this shortly.

 TestGetBlocks.testGetBlocks is failing
 --

 Key: HDFS-3143
 URL: https://issues.apache.org/jira/browse/HDFS-3143
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Eli Collins
Assignee: Arpit Gupta
 Attachments: HDFS-3143.patch


 TestGetBlocks.testGetBlocks is failing in the latest trunk/23 builds. Last 
 good build was Mar 23rd.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-1218) 20 append: Blocks recovered on startup should be treated with lower priority during block synchronization

2012-03-28 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240611#comment-13240611
 ] 

Todd Lipcon commented on HDFS-1218:
---

Hi Uma. The idea was to exclude the restarted node from the length calculation. 
It looks like you're right that we aren't putting them in syncList at all, 
whereas we could put them in syncList in the case that they have length >= the 
calculated minlength.

However, it's still the case that DN3 might be shorter than the good replicas, 
and not included in recovery. In that case, it should be deleted when it 
reports the block with the too-low GS later. I guess the real issue is that we 
don't include all RBW blocks in block reports in the 1.0 implementation, so it 
sticks around forever?
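
The replica-selection rule being discussed could be sketched like this. It is a hedged illustration only: the record and field names are assumptions, not the 1.0 DataNode code. The rule shown is the one described above: compute the minimum length over replicas from nodes that did NOT restart, then admit any replica (restarted or not) whose length is at least that minimum.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of building the block-synchronization syncList.
public class SyncListSketch {
  public static class Replica {
    final long length;
    final boolean nodeRestarted;
    public Replica(long length, boolean nodeRestarted) {
      this.length = length;
      this.nodeRestarted = nodeRestarted;
    }
  }

  public static List<Replica> buildSyncList(List<Replica> replicas) {
    // Min length over replicas on nodes that did not restart; restarted
    // nodes are excluded because their replicas may have been truncated.
    long minLength = Long.MAX_VALUE;
    for (Replica r : replicas) {
      if (!r.nodeRestarted) {
        minLength = Math.min(minLength, r.length);
      }
    }
    List<Replica> syncList = new ArrayList<>();
    for (Replica r : replicas) {
      // A restarted node's replica joins only if it is not truncated below
      // the length agreed on by the non-restarted replicas.
      if (r.length >= minLength) {
        syncList.add(r);
      }
    }
    return syncList;
  }
}
```

A replica shorter than the good replicas would still be left out, matching the remaining concern above about it later being reported with a too-low genstamp.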

 20 append: Blocks recovered on startup should be treated with lower priority 
 during block synchronization
 -

 Key: HDFS-1218
 URL: https://issues.apache.org/jira/browse/HDFS-1218
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.20-append
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Critical
 Fix For: 0.20.205.0

 Attachments: HDFS-1218.20s.2.patch, hdfs-1281.txt


 When a datanode experiences power loss, it can come back up with truncated 
 replicas (due to local FS journal replay). Those replicas should not be 
 allowed to truncate the block during block synchronization if there are other 
 replicas from DNs that have _not_ restarted.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3004) Implement Recovery Mode

2012-03-28 Thread Colin Patrick McCabe (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3004:
---

Attachment: HDFS-3004.034.patch

* address Todd's suggestions

 Implement Recovery Mode
 ---

 Key: HDFS-3004
 URL: https://issues.apache.org/jira/browse/HDFS-3004
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: tools
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-3004.010.patch, HDFS-3004.011.patch, 
 HDFS-3004.012.patch, HDFS-3004.013.patch, HDFS-3004.015.patch, 
 HDFS-3004.016.patch, HDFS-3004.017.patch, HDFS-3004.018.patch, 
 HDFS-3004.019.patch, HDFS-3004.020.patch, HDFS-3004.022.patch, 
 HDFS-3004.023.patch, HDFS-3004.024.patch, HDFS-3004.026.patch, 
 HDFS-3004.027.patch, HDFS-3004.029.patch, HDFS-3004.030.patch, 
 HDFS-3004.031.patch, HDFS-3004.032.patch, HDFS-3004.033.patch, 
 HDFS-3004.034.patch, HDFS-3004__namenode_recovery_tool.txt


 When the NameNode metadata is corrupt for some reason, we want to be able to 
 fix it.  Obviously, we would prefer never to get in this case.  In a perfect 
 world, we never would.  However, bad data on disk can happen from time to 
 time, because of hardware errors or misconfigurations.  In the past we have 
 had to correct it manually, which is time-consuming and which can result in 
 downtime.
 Recovery mode is initialized by the system administrator.  When the NameNode 
 starts up in Recovery Mode, it will try to load the FSImage file, apply all 
 the edits from the edits log, and then write out a new image.  Then it will 
 shut down.
 Unlike in the normal startup process, the recovery mode startup process will 
 be interactive.  When the NameNode finds something that is inconsistent, it 
 will prompt the operator as to what it should do.   The operator can also 
 choose to take the first option for all prompts by starting up with the '-f' 
 flag, or typing 'a' at one of the prompts.
 I have reused as much code as possible from the NameNode in this tool.  
 Hopefully, the effort that was spent developing this will also make the 
 NameNode editLog and image processing even more robust than it already is.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3156) TestDFSHAAdmin is failing post HADOOP-8202

2012-03-28 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240624#comment-13240624
 ] 

Hudson commented on HDFS-3156:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1952 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1952/])
HDFS-3156. TestDFSHAAdmin is failing post HADOOP-8202. Contributed by Aaron 
T. Myers. (Revision 1306517)

 Result = ABORTED
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306517
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java


 TestDFSHAAdmin is failing post HADOOP-8202
 --

 Key: HDFS-3156
 URL: https://issues.apache.org/jira/browse/HDFS-3156
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.0

 Attachments: HDFS-3156.patch


 TestDFSHAAdmin mocks a protocol object without implementing Closeable, which 
 is now required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3143) TestGetBlocks.testGetBlocks is failing

2012-03-28 Thread Aaron T. Myers (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3143:
-

 Target Version/s: 2.0.0  (was: 0.23.3, 0.24.0)
Affects Version/s: 2.0.0

 TestGetBlocks.testGetBlocks is failing
 --

 Key: HDFS-3143
 URL: https://issues.apache.org/jira/browse/HDFS-3143
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Arpit Gupta
 Attachments: HDFS-3143.patch


 TestGetBlocks.testGetBlocks is failing in the latest trunk/23 builds. Last 
 good build was Mar 23rd.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HDFS-3143) TestGetBlocks.testGetBlocks is failing

2012-03-28 Thread Aaron T. Myers (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3143:
-

   Resolution: Fixed
Fix Version/s: 2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've just committed this to trunk and branch-2.

Thanks a lot for the contribution, Arpit. Thanks for the discussion, Todd and 
Nicholas.

 TestGetBlocks.testGetBlocks is failing
 --

 Key: HDFS-3143
 URL: https://issues.apache.org/jira/browse/HDFS-3143
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Arpit Gupta
 Fix For: 2.0.0

 Attachments: HDFS-3143.patch


 TestGetBlocks.testGetBlocks is failing in the latest trunk/23 builds. Last 
 good build was Mar 23rd.





[jira] [Commented] (HDFS-3143) TestGetBlocks.testGetBlocks is failing

2012-03-28 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240642#comment-13240642
 ] 

Hudson commented on HDFS-3143:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2015 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2015/])
HDFS-3143. TestGetBlocks.testGetBlocks is failing. Contributed by Arpit 
Gupta. (Revision 1306542)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306542
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestGetBlocks.java


 TestGetBlocks.testGetBlocks is failing
 --

 Key: HDFS-3143
 URL: https://issues.apache.org/jira/browse/HDFS-3143
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Arpit Gupta
 Fix For: 2.0.0

 Attachments: HDFS-3143.patch


 TestGetBlocks.testGetBlocks is failing in the latest trunk/23 builds. Last 
 good build was Mar 23rd.





[jira] [Commented] (HDFS-3143) TestGetBlocks.testGetBlocks is failing

2012-03-28 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240643#comment-13240643
 ] 

Hudson commented on HDFS-3143:
--

Integrated in Hadoop-Common-trunk-Commit #1940 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1940/])
HDFS-3143. TestGetBlocks.testGetBlocks is failing. Contributed by Arpit 
Gupta. (Revision 1306542)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306542
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestGetBlocks.java


 TestGetBlocks.testGetBlocks is failing
 --

 Key: HDFS-3143
 URL: https://issues.apache.org/jira/browse/HDFS-3143
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Arpit Gupta
 Fix For: 2.0.0

 Attachments: HDFS-3143.patch


 TestGetBlocks.testGetBlocks is failing in the latest trunk/23 builds. Last 
 good build was Mar 23rd.





[jira] [Commented] (HDFS-3142) TestHDFSCLI.testAll is failing

2012-03-28 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240641#comment-13240641
 ] 

Aaron T. Myers commented on HDFS-3142:
--

Hey Brandon, just FYI - per the discussion on HDFS-3143, just updating the 
expected CLI output will be sufficient to address this issue. Whenever you post 
a patch, I'll be sure to review it promptly.

 TestHDFSCLI.testAll is failing
 --

 Key: HDFS-3142
 URL: https://issues.apache.org/jira/browse/HDFS-3142
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Brandon Li
Priority: Blocker

 TestHDFSCLI.testAll is failing in the latest trunk/23 builds. Last good build 
 was Mar 23rd.





[jira] [Updated] (HDFS-3142) TestHDFSCLI.testAll is failing

2012-03-28 Thread Aaron T. Myers (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3142:
-

 Target Version/s: 2.0.0  (was: 0.23.3)
Affects Version/s: 2.0.0

 TestHDFSCLI.testAll is failing
 --

 Key: HDFS-3142
 URL: https://issues.apache.org/jira/browse/HDFS-3142
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Brandon Li
Priority: Blocker

 TestHDFSCLI.testAll is failing in the latest trunk/23 builds. Last good build 
 was Mar 23rd.





[jira] [Commented] (HDFS-3139) Minor Datanode logging improvement

2012-03-28 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240647#comment-13240647
 ] 

Aaron T. Myers commented on HDFS-3139:
--

+1, the patch looks good to me. Those two failing test cases are unrelated.

 Minor Datanode logging improvement
 --

 Key: HDFS-3139
 URL: https://issues.apache.org/jira/browse/HDFS-3139
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Attachments: hdfs-3139.txt, hdfs-3139.txt


 - DatanodeInfo#getDatanodeReport should log its hostname, in addition to the 
 DNS lookup it does on its IP
 - Datanode should log the ipc/info/streaming servers it's listening on at 
 startup at INFO level
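As a minimal sketch of the second bullet, the kind of per-server startup line intended might look like the following. The exact wording, the helper name, and the example addresses are illustrative assumptions, not the actual DataNode logging code:

```java
public class StartupLogSketch {
    // Formats one INFO-level line per server the Datanode binds at startup.
    // The "kind" values (ipc/info/streaming) come from the bullet above;
    // the message text itself is an assumption for illustration.
    static String startupLine(String kind, String host, int port) {
        return "INFO DataNode " + kind + " server listening on " + host + ":" + port;
    }

    public static void main(String[] args) {
        for (String kind : new String[] {"ipc", "info", "streaming"}) {
            System.out.println(startupLine(kind, "0.0.0.0", 50020));
        }
    }
}
```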





[jira] [Updated] (HDFS-3139) Minor Datanode logging improvement

2012-03-28 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3139:
--

  Resolution: Fixed
   Fix Version/s: 2.0.0
Target Version/s:   (was: 0.23.3)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks ATM. I've committed this and merged to branch-2.

 Minor Datanode logging improvement
 --

 Key: HDFS-3139
 URL: https://issues.apache.org/jira/browse/HDFS-3139
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 2.0.0

 Attachments: hdfs-3139.txt, hdfs-3139.txt


 - DatanodeInfo#getDatanodeReport should log its hostname, in addition to the 
 DNS lookup it does on its IP
 - Datanode should log the ipc/info/streaming servers it's listening on at 
 startup at INFO level





[jira] [Commented] (HDFS-3139) Minor Datanode logging improvement

2012-03-28 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240668#comment-13240668
 ] 

Hudson commented on HDFS-3139:
--

Integrated in Hadoop-Common-trunk-Commit #1941 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1941/])
HDFS-3139. Minor Datanode logging improvement. Contributed by Eli Collins 
(Revision 1306549)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306549
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeID.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeInfo.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/SecureDataNodeStarter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSAddressConfig.java


 Minor Datanode logging improvement
 --

 Key: HDFS-3139
 URL: https://issues.apache.org/jira/browse/HDFS-3139
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: data-node
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 2.0.0

 Attachments: hdfs-3139.txt, hdfs-3139.txt


 - DatanodeInfo#getDatanodeReport should log its hostname, in addition to the 
 DNS lookup it does on its IP
 - Datanode should log the ipc/info/streaming servers it's listening on at 
 startup at INFO level





[jira] [Commented] (HDFS-3143) TestGetBlocks.testGetBlocks is failing

2012-03-28 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240669#comment-13240669
 ] 

Hudson commented on HDFS-3143:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1953 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1953/])
HDFS-3143. TestGetBlocks.testGetBlocks is failing. Contributed by Arpit 
Gupta. (Revision 1306542)

 Result = ABORTED
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306542
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestGetBlocks.java


 TestGetBlocks.testGetBlocks is failing
 --

 Key: HDFS-3143
 URL: https://issues.apache.org/jira/browse/HDFS-3143
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: Eli Collins
Assignee: Arpit Gupta
 Fix For: 2.0.0

 Attachments: HDFS-3143.patch


 TestGetBlocks.testGetBlocks is failing in the latest trunk/23 builds. Last 
 good build was Mar 23rd.





[jira] [Commented] (HDFS-3004) Implement Recovery Mode

2012-03-28 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240693#comment-13240693
 ] 

Hadoop QA commented on HDFS-3004:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520296/HDFS-3004.034.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 21 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.cli.TestHDFSCLI
  org.apache.hadoop.hdfs.TestGetBlocks

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2110//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2110//console

This message is automatically generated.

 Implement Recovery Mode
 ---

 Key: HDFS-3004
 URL: https://issues.apache.org/jira/browse/HDFS-3004
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: tools
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-3004.010.patch, HDFS-3004.011.patch, 
 HDFS-3004.012.patch, HDFS-3004.013.patch, HDFS-3004.015.patch, 
 HDFS-3004.016.patch, HDFS-3004.017.patch, HDFS-3004.018.patch, 
 HDFS-3004.019.patch, HDFS-3004.020.patch, HDFS-3004.022.patch, 
 HDFS-3004.023.patch, HDFS-3004.024.patch, HDFS-3004.026.patch, 
 HDFS-3004.027.patch, HDFS-3004.029.patch, HDFS-3004.030.patch, 
 HDFS-3004.031.patch, HDFS-3004.032.patch, HDFS-3004.033.patch, 
 HDFS-3004.034.patch, HDFS-3004__namenode_recovery_tool.txt


 When the NameNode metadata is corrupt for some reason, we want to be able to 
 fix it.  Obviously, we would prefer never to get into this state.  In a 
 perfect world, we never would.  However, bad data on disk can happen from 
 time to time, because of hardware errors or misconfigurations.  In the past 
 we have had to correct it manually, which is time-consuming and can result 
 in downtime.
 Recovery mode is initialized by the system administrator.  When the NameNode 
 starts up in Recovery Mode, it will try to load the FSImage file, apply all 
 the edits from the edits log, and then write out a new image.  Then it will 
 shut down.
 Unlike in the normal startup process, the recovery mode startup process will 
 be interactive.  When the NameNode finds something that is inconsistent, it 
 will prompt the operator as to what it should do.  The operator can also 
 choose to take the first option for all prompts by starting up with the '-f' 
 flag, or by typing 'a' at one of the prompts.
 I have reused as much code as possible from the NameNode in this tool.  
 Hopefully, the effort that was spent developing this will also make the 
 NameNode editLog and image processing even more robust than it already is.
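The prompt behaviour described above (take the first option for every prompt via the '-f' flag, or by answering 'a' once) can be sketched as a small pure helper. This is only an illustrative sketch: the class name, the `resolve` method, and the 1-based menu answers are assumptions, not the actual HDFS-3004 patch code.

```java
import java.util.Arrays;
import java.util.List;

public class RecoveryPromptSketch {
    // Tracks whether the operator chose "always take the first option",
    // either via a '-f' style startup flag or by answering 'a' at a prompt.
    private boolean alwaysFirst;

    public RecoveryPromptSketch(boolean forceFirst) {
        this.alwaysFirst = forceFirst;
    }

    /**
     * Resolve one prompt. 'answer' simulates what the operator typed;
     * once alwaysFirst is set, the first choice is taken without asking.
     */
    public String resolve(List<String> choices, String answer) {
        if (alwaysFirst) {
            return choices.get(0);
        }
        if ("a".equals(answer)) {
            alwaysFirst = true;  // apply the first option for all later prompts
            return choices.get(0);
        }
        return choices.get(Integer.parseInt(answer) - 1); // 1-based menu index
    }

    public static void main(String[] args) {
        RecoveryPromptSketch p = new RecoveryPromptSketch(false);
        List<String> choices = Arrays.asList("continue", "stop", "quit");
        System.out.println(p.resolve(choices, "2")); // operator picked option 2
        System.out.println(p.resolve(choices, "a")); // 'a' locks in option 1
        System.out.println(p.resolve(choices, "3")); // ignored: always-first now
    }
}
```

With '-f' the constructor argument would simply be true, so every prompt resolves to its first choice without operator interaction.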





[jira] [Commented] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync follwed by closing that file

2012-03-28 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240697#comment-13240697
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3119:
--

If the block remains over-replicated forever, then it is a bug.

 Overreplicated block is not deleted even after the replication factor is 
 reduced after sync follwed by closing that file
 

 Key: HDFS-3119
 URL: https://issues.apache.org/jira/browse/HDFS-3119
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.24.0
Reporter: J.Andreina
Priority: Minor
 Fix For: 0.24.0, 0.23.2


 cluster setup:
 --
 1 NN, 2 DN, replication factor 2, block report interval 3 sec, block size 256MB
 step 1: write a file filewrite.txt of size 90 bytes with sync (not closed)
 step 2: change the replication factor to 1 using the command: ./hdfs dfs 
 -setrep 1 /filewrite.txt
 step 3: close the file
 * At the NN side the "Decreasing replication from 2 to 1 for 
 /filewrite.txt" log has occurred, but the overreplicated blocks are not 
 deleted even after the block report is sent from the DN
 * while listing the file in the console using ./hdfs dfs -ls, the 
 replication factor for that file is mentioned as 1
 * the fsck report for that file displays that the file is replicated to 2 
 datanodes





[jira] [Updated] (HDFS-3155) Clean up FSDataset implemenation related code.

2012-03-28 Thread Tsz Wo (Nicholas), SZE (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3155:
-

   Resolution: Fixed
Fix Version/s: 2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I have committed this to trunk and branch-2.

 Clean up FSDataset implemenation related code.
 --

 Key: HDFS-3155
 URL: https://issues.apache.org/jira/browse/HDFS-3155
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 2.0.0

 Attachments: h3155_20120327.patch








[jira] [Commented] (HDFS-3155) Clean up FSDataset implemenation related code.

2012-03-28 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240721#comment-13240721
 ] 

Hudson commented on HDFS-3155:
--

Integrated in Hadoop-Common-trunk-Commit #1942 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1942/])
HDFS-3155. Clean up FSDataset implemenation related code. (Revision 1306582)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306582
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaUnderRecovery.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery2.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPipelines.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeAdapter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReport.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/HAStressTestHarness.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/HATestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyIsHot.java


 Clean up FSDataset implemenation related code.
 --

 Key: HDFS-3155
 URL: https://issues.apache.org/jira/browse/HDFS-3155
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 2.0.0

 Attachments: h3155_20120327.patch








[jira] [Commented] (HDFS-3155) Clean up FSDataset implemenation related code.

2012-03-28 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240720#comment-13240720
 ] 

Hudson commented on HDFS-3155:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2017 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2017/])
HDFS-3155. Clean up FSDataset implemenation related code. (Revision 1306582)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306582
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaUnderRecovery.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery2.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPipelines.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeAdapter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReport.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/HAStressTestHarness.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/HATestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyIsHot.java


 Clean up FSDataset implemenation related code.
 --

 Key: HDFS-3155
 URL: https://issues.apache.org/jira/browse/HDFS-3155
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 2.0.0

 Attachments: h3155_20120327.patch








[jira] [Created] (HDFS-3158) LiveNodes mebmber of NameNodeMXBean should list non-DFS used space and capacity per DN

2012-03-28 Thread Aaron T. Myers (Created) (JIRA)
LiveNodes mebmber of NameNodeMXBean should list non-DFS used space and capacity 
per DN
--

 Key: HDFS-3158
 URL: https://issues.apache.org/jira/browse/HDFS-3158
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers


The LiveNodes section already lists the DFS used space per DN. It would be 
nice if it also listed the non-DFS used space and the capacity per DN.
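A minimal sketch of the proposed per-DN entry follows. The key names (`usedSpace`, `nonDfsUsedSpace`, `capacity`) and the map-of-longs shape are assumptions for illustration, not the actual NameNodeMXBean output format.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LiveNodesSketch {
    // Builds one per-datanode entry for a LiveNodes-style JMX map.
    static Map<String, Long> liveNodeEntry(long dfsUsed, long nonDfsUsed,
                                           long capacity) {
        Map<String, Long> entry = new LinkedHashMap<>();
        entry.put("usedSpace", dfsUsed);          // already exposed today
        entry.put("nonDfsUsedSpace", nonDfsUsed); // proposed addition
        entry.put("capacity", capacity);          // proposed addition
        return entry;
    }

    public static void main(String[] args) {
        // Example entry for one DN, in bytes.
        System.out.println(liveNodeEntry(100L, 20L, 500L));
    }
}
```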





[jira] [Commented] (HDFS-1218) 20 append: Blocks recovered on startup should be treated with lower priority during block synchronization

2012-03-28 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240746#comment-13240746
 ] 

Todd Lipcon commented on HDFS-1218:
---

Since this issue has been closed for a long time, mind opening a new one against 
branch-1? If you could come up with a test case, that would also be great. It 
seems like you could modify the existing test cases just to make sure that the 
other replica eventually gets removed.

 20 append: Blocks recovered on startup should be treated with lower priority 
 during block synchronization
 -

 Key: HDFS-1218
 URL: https://issues.apache.org/jira/browse/HDFS-1218
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.20-append
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Critical
 Fix For: 0.20.205.0

 Attachments: HDFS-1218.20s.2.patch, hdfs-1281.txt


 When a datanode experiences power loss, it can come back up with truncated 
 replicas (due to local FS journal replay). Those replicas should not be 
 allowed to truncate the block during block synchronization if there are other 
 replicas from DNs that have _not_ restarted.
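The preference described above can be sketched as follows. This is an illustrative sketch of the idea only; the class, fields, and `syncLength` helper are assumptions, not the actual block-synchronization code.

```java
import java.util.ArrayList;
import java.util.List;

public class ReplicaChoiceSketch {
    static class Replica {
        final long length;
        final boolean recoveredOnStartup; // DN restarted and replayed its journal
        Replica(long length, boolean recoveredOnStartup) {
            this.length = length;
            this.recoveredOnStartup = recoveredOnStartup;
        }
    }

    // Prefer replicas from DNs that did NOT restart; fall back to
    // startup-recovered (possibly truncated) replicas only when no
    // other replica exists.
    static long syncLength(List<Replica> replicas) {
        List<Replica> preferred = new ArrayList<>();
        for (Replica r : replicas) {
            if (!r.recoveredOnStartup) preferred.add(r);
        }
        List<Replica> candidates = preferred.isEmpty() ? replicas : preferred;
        long min = Long.MAX_VALUE;
        for (Replica r : candidates) min = Math.min(min, r.length);
        return min;
    }

    public static void main(String[] args) {
        List<Replica> rs = new ArrayList<>();
        rs.add(new Replica(100, false)); // replica from a DN that stayed up
        rs.add(new Replica(40, true));   // truncated replica from a restarted DN
        // The truncated replica is ignored, so the block is not cut to 40.
        System.out.println(syncLength(rs));
    }
}
```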





[jira] [Updated] (HDFS-3158) LiveNodes member of NameNodeMXBean should list non-DFS used space and capacity per DN

2012-03-28 Thread Aaron T. Myers (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3158:
-

Summary: LiveNodes member of NameNodeMXBean should list non-DFS used space 
and capacity per DN  (was: LiveNodes mebmber of NameNodeMXBean should list 
non-DFS used space and capacity per DN)

Misspelled "member" in the summary.

 LiveNodes member of NameNodeMXBean should list non-DFS used space and 
 capacity per DN
 -

 Key: HDFS-3158
 URL: https://issues.apache.org/jira/browse/HDFS-3158
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-3158.patch


 The LiveNodes section already lists the DFS used space per DN. It would be 
 nice if it also listed the non-DFS used space and the capacity per DN.





[jira] [Updated] (HDFS-3158) LiveNodes member of NameNodeMXBean should list non-DFS used space and capacity per DN

2012-03-28 Thread Aaron T. Myers (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3158:
-

Status: Patch Available  (was: Open)

 LiveNodes member of NameNodeMXBean should list non-DFS used space and 
 capacity per DN
 -

 Key: HDFS-3158
 URL: https://issues.apache.org/jira/browse/HDFS-3158
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-3158.patch


 The LiveNodes section already lists the DFS used space per DN. It would be 
 nice if it also listed the non-DFS used space and the capacity per DN.





[jira] [Updated] (HDFS-3158) LiveNodes member of NameNodeMXBean should list non-DFS used space and capacity per DN

2012-03-28 Thread Aaron T. Myers (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3158:
-

Attachment: HDFS-3158.patch

Here's a patch for trunk which addresses the issue.

 LiveNodes member of NameNodeMXBean should list non-DFS used space and 
 capacity per DN
 -

 Key: HDFS-3158
 URL: https://issues.apache.org/jira/browse/HDFS-3158
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-3158.patch


 The LiveNodes section already lists the DFS used space per DN. It would be 
 nice if it also listed the non-DFS used space and the capacity per DN.





[jira] [Commented] (HDFS-3155) Clean up FSDataset implemenation related code.

2012-03-28 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240749#comment-13240749
 ] 

Hudson commented on HDFS-3155:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1955 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1955/])
HDFS-3155. Clean up FSDataset implemenation related code. (Revision 1306582)

 Result = ABORTED
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306582
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaUnderRecovery.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLeaseRecovery2.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPipelines.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeAdapter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/DataNodeTestUtils.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockReport.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/HAStressTestHarness.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/HATestUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDNFencing.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyIsHot.java


 Clean up FSDataset implemenation related code.
 --

 Key: HDFS-3155
 URL: https://issues.apache.org/jira/browse/HDFS-3155
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 2.0.0

 Attachments: h3155_20120327.patch








[jira] [Commented] (HDFS-3158) LiveNodes member of NameNodeMXBean should list non-DFS used space and capacity per DN

2012-03-28 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240771#comment-13240771
 ] 

Eli Collins commented on HDFS-3158:
---

+1 lgtm

 LiveNodes member of NameNodeMXBean should list non-DFS used space and 
 capacity per DN
 -

 Key: HDFS-3158
 URL: https://issues.apache.org/jira/browse/HDFS-3158
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-3158.patch


 The LiveNodes section already lists the DFS used space per DN. It would be 
 nice if it also listed the non-DFS used space and the capacity per DN.





[jira] [Created] (HDFS-3159) Document NN auto-failover setup and configuration

2012-03-28 Thread Todd Lipcon (Created) (JIRA)
Document NN auto-failover setup and configuration
-

 Key: HDFS-3159
 URL: https://issues.apache.org/jira/browse/HDFS-3159
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: auto-failover, documentation, ha
Affects Versions: Auto failover (HDFS-3042)
Reporter: Todd Lipcon


We should document how to configure, set up, and monitor an automatic failover 
setup. This will require adding the new configs to the *-default.xml and adding 
prose to the apt docs as well.





[jira] [Assigned] (HDFS-3000) Add a public API for setting quotas

2012-03-28 Thread Aaron T. Myers (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers reassigned HDFS-3000:


Assignee: Aaron T. Myers

 Add a public API for setting quotas
 ---

 Key: HDFS-3000
 URL: https://issues.apache.org/jira/browse/HDFS-3000
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.23.1
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers

 Currently one can set the quota of a file or directory from the command line, 
 but if a user wants to set it programmatically, they need to use 
 DistributedFileSystem, which is annotated InterfaceAudience.Private.





[jira] [Commented] (HDFS-3000) Add a public API for setting quotas

2012-03-28 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240796#comment-13240796
 ] 

Aaron T. Myers commented on HDFS-3000:
--

I've given this some thought and come to the conclusion that we should just add 
{{setQuota(Path p, long nsQuota, long dsQuota)}} to the public interface of 
o.a.h.fs.FileSystem. Doing so shouldn't be incompatible, as we're just adding a 
net new method.

The concept of a quota is already somewhat exposed in o.a.h.fs.FileSystem, 
since {{getContentSummary(Path)}} is included in FileSystem, which returns an 
object containing quota information. FileSystem implementations which don't 
support quotas return -1 for the quota fields. We could either add a no-op 
default implementation of {{setQuota(...)}} to FileSystem, or add a default 
implementation that throws an UnsupportedOperationException, as is done for 
{{o.a.h.fs.FileSystem#listCorruptFileBlocks}}.

If people are OK with this proposal, I'll whip up a patch real quick.
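The proposal above can be sketched as follows. This is a hypothetical illustration of the throwing-default option (modeled on {{listCorruptFileBlocks}}), not the actual Hadoop API: the class and field names are stand-ins, and a real implementation would RPC to the NameNode.

```java
// Minimal stand-in for org.apache.hadoop.fs.Path (illustration only).
class Path {
  private final String p;
  Path(String p) { this.p = p; }
  @Override public String toString() { return p; }
}

// Hypothetical sketch of the proposal: FileSystem gains setQuota with a
// default that throws, so file systems without quota support need no change.
abstract class FileSystem {
  public void setQuota(Path p, long nsQuota, long dsQuota) {
    throw new UnsupportedOperationException(
        getClass().getSimpleName() + " does not support quotas");
  }
}

// A file system that does not override setQuota inherits the throw.
class LocalFs extends FileSystem {}

// Stand-in for DistributedFileSystem: overrides setQuota.
class DistFs extends FileSystem {
  long ns = -1, ds = -1;
  @Override
  public void setQuota(Path p, long nsQuota, long dsQuota) {
    ns = nsQuota;  // a real implementation would RPC to the NameNode
    ds = dsQuota;
  }
}
```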

 Add a public API for setting quotas
 ---

 Key: HDFS-3000
 URL: https://issues.apache.org/jira/browse/HDFS-3000
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.23.1
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers

 Currently one can set the quota of a file or directory from the command line, 
 but if a user wants to set it programmatically, they need to use 
 DistributedFileSystem, which is annotated InterfaceAudience.Private.





[jira] [Commented] (HDFS-3158) LiveNodes member of NameNodeMXBean should list non-DFS used space and capacity per DN

2012-03-28 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240806#comment-13240806
 ] 

Hadoop QA commented on HDFS-3158:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520324/HDFS-3158.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.hdfs.server.common.TestDistributedUpgrade
  org.apache.hadoop.cli.TestHDFSCLI

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2111//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2111//console

This message is automatically generated.

 LiveNodes member of NameNodeMXBean should list non-DFS used space and 
 capacity per DN
 -

 Key: HDFS-3158
 URL: https://issues.apache.org/jira/browse/HDFS-3158
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Attachments: HDFS-3158.patch


 The LiveNodes section already lists the DFS used space per DN. It would be 
 nice if it also listed the non-DFS used space and the capacity per DN.





[jira] [Updated] (HDFS-3158) LiveNodes member of NameNodeMXBean should list non-DFS used space and capacity per DN

2012-03-28 Thread Aaron T. Myers (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-3158:
-

   Resolution: Fixed
Fix Version/s: 2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Those two test failures are unrelated. One is known to be failing on trunk, the 
other frequently fails spuriously.

I've just committed this to trunk and branch-2. Thanks a lot for the quick 
review, Eli.

 LiveNodes member of NameNodeMXBean should list non-DFS used space and 
 capacity per DN
 -

 Key: HDFS-3158
 URL: https://issues.apache.org/jira/browse/HDFS-3158
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.0

 Attachments: HDFS-3158.patch


 The LiveNodes section already lists the DFS used space per DN. It would be 
 nice if it also listed the non-DFS used space and the capacity per DN.





[jira] [Commented] (HDFS-3158) LiveNodes member of NameNodeMXBean should list non-DFS used space and capacity per DN

2012-03-28 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240816#comment-13240816
 ] 

Hudson commented on HDFS-3158:
--

Integrated in Hadoop-Hdfs-trunk-Commit #2018 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2018/])
HDFS-3158. LiveNodes member of NameNodeMXBean should list non-DFS used 
space and capacity per DN. Contributed by Aaron T. Myers. (Revision 1306635)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306635
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java


 LiveNodes member of NameNodeMXBean should list non-DFS used space and 
 capacity per DN
 -

 Key: HDFS-3158
 URL: https://issues.apache.org/jira/browse/HDFS-3158
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.0

 Attachments: HDFS-3158.patch


 The LiveNodes section already lists the DFS used space per DN. It would be 
 nice if it also listed the non-DFS used space and the capacity per DN.





[jira] [Created] (HDFS-3160) httpfs should exec catalina instead of forking it

2012-03-28 Thread Roman Shaposhnik (Created) (JIRA)
httpfs should exec catalina instead of forking it
-

 Key: HDFS-3160
 URL: https://issues.apache.org/jira/browse/HDFS-3160
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.1
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
 Fix For: 0.23.2


In Bigtop we would like to start supporting constant monitoring of the running 
daemons (BIGTOP-263). It would be nice if Oozie can support that requirement by 
execing Catalina instead of forking it off. Currently we have to track down the 
actual process being monitored through the script that still hangs around.





[jira] [Commented] (HDFS-3158) LiveNodes member of NameNodeMXBean should list non-DFS used space and capacity per DN

2012-03-28 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240817#comment-13240817
 ] 

Hudson commented on HDFS-3158:
--

Integrated in Hadoop-Common-trunk-Commit #1943 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1943/])
HDFS-3158. LiveNodes member of NameNodeMXBean should list non-DFS used 
space and capacity per DN. Contributed by Aaron T. Myers. (Revision 1306635)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306635
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java


 LiveNodes member of NameNodeMXBean should list non-DFS used space and 
 capacity per DN
 -

 Key: HDFS-3158
 URL: https://issues.apache.org/jira/browse/HDFS-3158
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.0

 Attachments: HDFS-3158.patch


 The LiveNodes section already lists the DFS used space per DN. It would be 
 nice if it also listed the non-DFS used space and the capacity per DN.





[jira] [Updated] (HDFS-3160) httpfs should exec catalina instead of forking it

2012-03-28 Thread Roman Shaposhnik (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HDFS-3160:
---

Attachment: HDFS-3160.patch.txt

 httpfs should exec catalina instead of forking it
 -

 Key: HDFS-3160
 URL: https://issues.apache.org/jira/browse/HDFS-3160
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.1
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
 Fix For: 0.23.2

 Attachments: HDFS-3160.patch.txt


 In Bigtop we would like to start supporting constant monitoring of the 
 running daemons (BIGTOP-263). It would be nice if Oozie can support that 
 requirement by execing Catalina instead of forking it off. Currently we have 
 to track down the actual process being monitored through the script that 
 still hangs around.





[jira] [Updated] (HDFS-3160) httpfs should exec catalina instead of forking it

2012-03-28 Thread Roman Shaposhnik (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HDFS-3160:
---

Status: Patch Available  (was: Open)

 httpfs should exec catalina instead of forking it
 -

 Key: HDFS-3160
 URL: https://issues.apache.org/jira/browse/HDFS-3160
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.1
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
 Fix For: 0.23.2

 Attachments: HDFS-3160.patch.txt


 In Bigtop we would like to start supporting constant monitoring of the 
 running daemons (BIGTOP-263). It would be nice if Oozie can support that 
 requirement by execing Catalina instead of forking it off. Currently we have 
 to track down the actual process being monitored through the script that 
 still hangs around.





[jira] [Commented] (HDFS-3000) Add a public API for setting quotas

2012-03-28 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240825#comment-13240825
 ] 

Eli Collins commented on HDFS-3000:
---

On one hand, quotas are not HDFS-specific, so I'm fine with moving them out of 
DistributedFileSystem. The quota interface isn't entirely generic, eg I'm not 
sure other file systems would combine the namespace and diskspace values in one 
API, but I suspect it's generic enough to be useful to another FS.

On the other hand, neither FileSystem nor FileContext (which would also need 
this) currently has an administrative-level interface (which quotas are).  Would 
it be better instead to have a new public class for administrative-level APIs 
that is public / maintained?  Eg in the same way we have haadmin and dfshaadmin, 
have both a generic and an HDFS-specific dfs admin class, and make the new one 
generic and public? 

 Add a public API for setting quotas
 ---

 Key: HDFS-3000
 URL: https://issues.apache.org/jira/browse/HDFS-3000
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.23.1
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers

 Currently one can set the quota of a file or directory from the command line, 
 but if a user wants to set it programmatically, they need to use 
 DistributedFileSystem, which is annotated InterfaceAudience.Private.





[jira] [Commented] (HDFS-3000) Add a public API for setting quotas

2012-03-28 Thread Colin Patrick McCabe (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240827#comment-13240827
 ] 

Colin Patrick McCabe commented on HDFS-3000:


Random thought:  Does it make sense to add more functions to the FileSystem 
API?  I thought it was deprecated in favor of FileContext.

 Add a public API for setting quotas
 ---

 Key: HDFS-3000
 URL: https://issues.apache.org/jira/browse/HDFS-3000
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.23.1
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers

 Currently one can set the quota of a file or directory from the command line, 
 but if a user wants to set it programmatically, they need to use 
 DistributedFileSystem, which is annotated InterfaceAudience.Private.





[jira] [Updated] (HDFS-3050) refactor OEV to share more code with the NameNode

2012-03-28 Thread Colin Patrick McCabe (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3050:
---

Attachment: HDFS-3050.014.patch

* fix unit test

* enable reading edit logs from XML 

* add -f / -fix-txids option, which makes oev close any holes in the 
transaction ID series.

* Editing the XML no longer allows you to manually recalculate checksums for 
the edited opcode.
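The -fix-txids pass described above can be illustrated like this. This is a hypothetical sketch, not the actual oev code: it renumbers a sequence of transaction IDs so the result is contiguous, closing any holes left by editing.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a "-fix-txids" pass (names are hypothetical):
// rewrite the txid of each parsed op so the series starts at the first
// op's txid and increases by exactly one, closing any holes.
class TxidFixer {
  static List<Long> fix(List<Long> txids) {
    List<Long> out = new ArrayList<>();
    long next = txids.isEmpty() ? 0L : txids.get(0);
    for (long ignored : txids) {
      out.add(next++);  // assign the next contiguous txid
    }
    return out;
  }
}
```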

 refactor OEV to share more code with the NameNode
 -

 Key: HDFS-3050
 URL: https://issues.apache.org/jira/browse/HDFS-3050
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-3050.006.patch, HDFS-3050.007.patch, 
 HDFS-3050.008.patch, HDFS-3050.009.patch, HDFS-3050.010.patch, 
 HDFS-3050.011.patch, HDFS-3050.012.patch, HDFS-3050.014.patch


 Currently, OEV (the offline edits viewer) re-implements all of the opcode 
 parsing logic found in the NameNode.  This duplicated code creates a 
 maintenance burden for us.
 OEV should be refactored to simply use the normal EditLog parsing code, 
 rather than rolling its own.





[jira] [Commented] (HDFS-3000) Add a public API for setting quotas

2012-03-28 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240837#comment-13240837
 ] 

Aaron T. Myers commented on HDFS-3000:
--

bq. On one hand, quotas are not HDFS-specific, so I'm fine with moving them out 
of DistributedFileSystem. The quota interface isn't entirely generic, eg I'm not 
sure other file systems would combine the namespace and diskspace values in one 
API, but I suspect it's generic enough to be useful to another FS.

This was my thinking as well.

bq. On the other hand, neither FileSystem nor FileContext (which would also 
need this) currently has an administrative-level interface (which quotas are). 
Would it be better instead to have a new public class for administrative-level 
APIs that is public / maintained? Eg in the same way we have haadmin and 
dfshaadmin, have both a generic and an HDFS-specific dfs admin class, and make 
the new one generic and public?

I agree with you that HDFS could stand to have some public administrative API, 
for example to get/set safe mode, but I don't think setQuotas should belong in 
that API for a few reasons:

# I don't think it's necessarily an administrative command per se. At least, 
I don't see why there's a meaningful distinction between setting quotas and, 
say, chown.
# There'd still be the issue of the asymmetry between having to get quotas via 
FileSystem, but set them via some other interface, which seems goofy to me.

Does this reasoning make sense?

You make a good point about FileContext. I'll add that to the patch.

 Add a public API for setting quotas
 ---

 Key: HDFS-3000
 URL: https://issues.apache.org/jira/browse/HDFS-3000
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.23.1
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers

 Currently one can set the quota of a file or directory from the command line, 
 but if a user wants to set it programmatically, they need to use 
 DistributedFileSystem, which is annotated InterfaceAudience.Private.





[jira] [Commented] (HDFS-3158) LiveNodes member of NameNodeMXBean should list non-DFS used space and capacity per DN

2012-03-28 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240841#comment-13240841
 ] 

Hudson commented on HDFS-3158:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1956 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1956/])
HDFS-3158. LiveNodes member of NameNodeMXBean should list non-DFS used 
space and capacity per DN. Contributed by Aaron T. Myers. (Revision 1306635)

 Result = ABORTED
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306635
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeMXBean.java


 LiveNodes member of NameNodeMXBean should list non-DFS used space and 
 capacity per DN
 -

 Key: HDFS-3158
 URL: https://issues.apache.org/jira/browse/HDFS-3158
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.0

 Attachments: HDFS-3158.patch


 The LiveNodes section already lists the DFS used space per DN. It would be 
 nice if it also listed the non-DFS used space and the capacity per DN.





[jira] [Commented] (HDFS-3050) refactor OEV to share more code with the NameNode

2012-03-28 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240842#comment-13240842
 ] 

Hadoop QA commented on HDFS-3050:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520341/HDFS-3050.014.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 6 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2113//console

This message is automatically generated.

 refactor OEV to share more code with the NameNode
 -

 Key: HDFS-3050
 URL: https://issues.apache.org/jira/browse/HDFS-3050
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-3050.006.patch, HDFS-3050.007.patch, 
 HDFS-3050.008.patch, HDFS-3050.009.patch, HDFS-3050.010.patch, 
 HDFS-3050.011.patch, HDFS-3050.012.patch, HDFS-3050.014.patch


 Currently, OEV (the offline edits viewer) re-implements all of the opcode 
 parsing logic found in the NameNode.  This duplicated code creates a 
 maintenance burden for us.
 OEV should be refactored to simply use the normal EditLog parsing code, 
 rather than rolling its own.





[jira] [Commented] (HDFS-3000) Add a public API for setting quotas

2012-03-28 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240843#comment-13240843
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3000:
--

I agree with Eli that it is better to add an Admin API than to add these 
methods to FileSystem.  The namespace quota is quite specific to HDFS.

 I don't think it's necessarily an administrative command per se. At least, 
 I don't see why there's a meaningful distinction between setting quotas and, 
 say, chown.

Setting a quota is a pure admin operation, but users can use chown to change 
groups, although they cannot change the owner.

 Add a public API for setting quotas
 ---

 Key: HDFS-3000
 URL: https://issues.apache.org/jira/browse/HDFS-3000
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.23.1
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers

 Currently one can set the quota of a file or directory from the command line, 
 but if a user wants to set it programmatically, they need to use 
 DistributedFileSystem, which is annotated InterfaceAudience.Private.





[jira] [Commented] (HDFS-3000) Add a public API for setting quotas

2012-03-28 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240849#comment-13240849
 ] 

Aaron T. Myers commented on HDFS-3000:
--

bq. The namespace quota is quite specific to HDFS.

Is it? I don't have any specific counterexamples, but I'd be surprised if other 
file systems didn't have a similar concept, given that INodes are usually a 
finite resource.

bq. Setting quota is pure admin method but users could use chown for changing 
groups although they cannot change owner.

I suppose that distinction is reasonable. The distinction, then, is a 
requirement of superuser privileges in all cases.

But, that still leaves open the issue of the asymmetry between getting/setting 
quotas. Does that not concern you at all, Nicholas?






[jira] [Commented] (HDFS-3050) refactor OEV to share more code with the NameNode

2012-03-28 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240850#comment-13240850
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3050:
--

- If XMLUtils is supposed to be generic, please add javadoc for all public 
classes/methods.  In general, please add javadoc for all public classes/methods.

- Please don't change the indentation in editsStored.xml.

 refactor OEV to share more code with the NameNode
 -

 Key: HDFS-3050
 URL: https://issues.apache.org/jira/browse/HDFS-3050
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-3050.006.patch, HDFS-3050.007.patch, 
 HDFS-3050.008.patch, HDFS-3050.009.patch, HDFS-3050.010.patch, 
 HDFS-3050.011.patch, HDFS-3050.012.patch, HDFS-3050.014.patch


 Current, OEV (the offline edits viewer) re-implements all of the opcode 
 parsing logic found in the NameNode.  This duplicated code creates a 
 maintenance burden for us.
 OEV should be refactored to simply use the normal EditLog parsing code, 
 rather than rolling its own.





[jira] [Commented] (HDFS-3000) Add a public API for setting quotas

2012-03-28 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240851#comment-13240851
 ] 

Aaron T. Myers commented on HDFS-3000:
--

bq. Random thought: Does it make sense to add more functions to the FileSystem 
API? I thought it was deprecated in favor of FileContext.

I think it does, as the vast majority of users are still on FileSystem. Of 
course, this discussion might be moot if we end up not adding anything to 
FileSystem at all, per Nicholas's suggestion. So let me get back to you on that 
one, Colin, once we come to a conclusion on the administrative API issue. :)






[jira] [Commented] (HDFS-3000) Add a public API for setting quotas

2012-03-28 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240852#comment-13240852
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3000:
--

 Is it? I don't have any specific counterexamples, ...

If you cannot easily find a counterexample, does it qualify as "quite 
specific"?

 ... Does that not concern you at all, Nicholas?

No.  There are many values that users can read but not update, e.g. hostname, 
other network configurations, home directory, etc.






[jira] [Commented] (HDFS-3000) Add a public API for setting quotas

2012-03-28 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240853#comment-13240853
 ] 

Eli Collins commented on HDFS-3000:
---

ATM has a point wrt the asymmetry: does it make sense to have setOwner and 
setQuota live in separate classes?  I thought FileSystem/FileContext were 
purely non-admin, but that's not the case.

Another idea: we could have an interface for administrative methods (setOwner, 
setQuota, etc.) but have the implementation still live in FileSystem/FileContext.






[jira] [Assigned] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync follwed by closing that file

2012-03-28 Thread Brandon Li (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li reassigned HDFS-3119:


Assignee: Brandon Li

 Overreplicated block is not deleted even after the replication factor is 
 reduced after sync follwed by closing that file
 

 Key: HDFS-3119
 URL: https://issues.apache.org/jira/browse/HDFS-3119
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.24.0
Reporter: J.Andreina
Assignee: Brandon Li
Priority: Minor
 Fix For: 0.24.0, 0.23.2


 cluster setup:
 --
 1NN,2 DN,replication factor 2,block report interval 3sec ,block size-256MB
 step1: write a file filewrite.txt of size 90bytes with sync(not closed) 
 step2: change the replication factor to 1  using the command: ./hdfs dfs 
 -setrep 1 /filewrite.txt
 step3: close the file
 * At the NN side the file Decreasing replication from 2 to 1 for 
 /filewrite.txt , logs has occured but the overreplicated blocks are not 
 deleted even after the block report is sent from DN
 * while listing the file in the console using ./hdfs dfs -ls  the 
 replication factor for that file is mentioned as 1
 * In fsck report for that files displays that the file is replicated to 2 
 datanodes





[jira] [Commented] (HDFS-3160) httpfs should exec catalina instead of forking it

2012-03-28 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240854#comment-13240854
 ] 

Hadoop QA commented on HDFS-3160:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12520338/HDFS-3160.patch.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.cli.TestHDFSCLI

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2112//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2112//console

This message is automatically generated.

 httpfs should exec catalina instead of forking it
 -

 Key: HDFS-3160
 URL: https://issues.apache.org/jira/browse/HDFS-3160
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.1
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
 Fix For: 0.23.2

 Attachments: HDFS-3160.patch.txt


 In Bigtop we would like to start supporting constant monitoring of the 
 running daemons (BIGTOP-263). It would be nice if Oozie can support that 
 requirement by execing Catalina instead of forking it off. Currently we have 
 to track down the actual process being monitored through the script that 
 still hangs around.





[jira] [Commented] (HDFS-3000) Add a public API for setting quotas

2012-03-28 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240858#comment-13240858
 ] 

Aaron T. Myers commented on HDFS-3000:
--

bq. If you cannot easily find a counterexample, does it qualify as "quite 
specific"?

I didn't say I couldn't find one - I said I didn't know. My not knowing if such 
things are common doesn't necessarily mean they're not. :)

bq. No. There are many values that users can read but not update, e.g. 
hostname, other network configurations, home directory, etc.

That seems reasonable to me. A separate administrative interface it is, then. 
I'll work on a patch for it.
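
A minimal sketch of what a separate administrative interface could look like, per Eli's suggestion that the admin methods get their own interface while the implementation still lives in one place. All names here are hypothetical stand-ins for illustration, not the actual API this JIRA produced.

```java
// Sketch: quota operations sit behind an admin-only interface, while the
// read side stays on the general user-facing surface. One implementation
// backs both. Names (FsAdmin, FsView, InMemoryFs) are hypothetical.
public class QuotaAdminSketch {

    /** Stand-in for the per-directory quota state a NameNode would track. */
    static final class QuotaEntry {
        long namespaceQuota;   // max number of names (files + directories)
        long diskspaceQuota;   // max total size in bytes
        QuotaEntry(long ns, long ds) { namespaceQuota = ns; diskspaceQuota = ds; }
    }

    /** Admin-only surface: setting quotas requires superuser privileges. */
    interface FsAdmin {
        void setQuota(String path, long namespaceQuota, long diskspaceQuota);
    }

    /** Read-only surface available to ordinary users. */
    interface FsView {
        QuotaEntry getQuota(String path);
    }

    /** One implementation backs both interfaces, mirroring the idea that
     *  the implementation still lives in FileSystem/FileContext. */
    static final class InMemoryFs implements FsAdmin, FsView {
        private final java.util.Map<String, QuotaEntry> quotas =
            new java.util.HashMap<>();
        public void setQuota(String path, long ns, long ds) {
            quotas.put(path, new QuotaEntry(ns, ds));
        }
        public QuotaEntry getQuota(String path) { return quotas.get(path); }
    }

    public static void main(String[] args) {
        InMemoryFs fs = new InMemoryFs();
        FsAdmin admin = fs;   // handed only to privileged callers
        FsView user = fs;     // anyone may read quotas
        admin.setQuota("/project", 10_000, 1L << 40);
        System.out.println(user.getQuota("/project").namespaceQuota); // 10000
    }
}
```

This keeps the get/set asymmetry explicit: reading a quota needs only the user-facing view, while mutating one needs a reference to the admin interface.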






[jira] [Updated] (HDFS-3160) httpfs should exec catalina instead of forking it

2012-03-28 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3160:
--

Target Version/s: 2.0.0
   Fix Version/s: (was: 0.23.2)
Hadoop Flags: Reviewed

+1  looks good

 httpfs should exec catalina instead of forking it
 -

 Key: HDFS-3160
 URL: https://issues.apache.org/jira/browse/HDFS-3160
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.1
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
 Fix For: 2.0.0

 Attachments: HDFS-3160.patch.txt


 In Bigtop we would like to start supporting constant monitoring of the 
 running daemons (BIGTOP-263). It would be nice if Oozie can support that 
 requirement by execing Catalina instead of forking it off. Currently we have 
 to track down the actual process being monitored through the script that 
 still hangs around.





[jira] [Commented] (HDFS-3160) httpfs should exec catalina instead of forking it

2012-03-28 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240871#comment-13240871
 ] 

Eli Collins commented on HDFS-3160:
---

Forgot to mention, the test failure is unrelated (HDFS-3142).






[jira] [Updated] (HDFS-3160) httpfs should exec catalina instead of forking it

2012-03-28 Thread Eli Collins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HDFS-3160:
--

  Resolution: Fixed
   Fix Version/s: 2.0.0
Target Version/s:   (was: 2.0.0)
  Status: Resolved  (was: Patch Available)

I've committed this and merged to branch-2. Thanks Roman!






[jira] [Commented] (HDFS-1218) 20 append: Blocks recovered on startup should be treated with lower priority during block synchronization

2012-03-28 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240872#comment-13240872
 ] 

Uma Maheswara Rao G commented on HDFS-1218:
---

Thanks a lot Todd.

{quote}
However, it's still the case that DN3 might be shorter than the good replicas, 
and not included in recovery. In that case, it should be deleted when it 
reports the block with the too-low GS later. I guess the real issue is that we 
don't include all RBW blocks in block reports in the 1.0 implementation, so it 
sticks around forever?
{quote}

Exactly.

{quote}
Since this issue has been closed a long time, mind opening a new one against 
branch-1? If you could come up with a test case that would also be great. Seems 
like you could modify the existing test cases just to make sure that the other 
replica eventually gets removed.
{quote}
Sure, I will file a new bug and come up with a test case.

 20 append: Blocks recovered on startup should be treated with lower priority 
 during block synchronization
 -

 Key: HDFS-1218
 URL: https://issues.apache.org/jira/browse/HDFS-1218
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.20-append
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Critical
 Fix For: 0.20.205.0

 Attachments: HDFS-1218.20s.2.patch, hdfs-1281.txt


 When a datanode experiences power loss, it can come back up with truncated 
 replicas (due to local FS journal replay). Those replicas should not be 
 allowed to truncate the block during block synchronization if there are other 
 replicas from DNs that have _not_ restarted.





[jira] [Commented] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync follwed by closing that file

2012-03-28 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240878#comment-13240878
 ] 

Uma Maheswara Rao G commented on HDFS-3119:
---

@Nicholas,
 I proposed the change in 
https://issues.apache.org/jira/browse/HDFS-3119?focusedCommentId=13240101page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13240101

Do you agree, that is reasonable?






[jira] [Commented] (HDFS-3122) Block recovery with closeFile flag true can race with blockReport. Due to this blocks are getting marked as corrupt.

2012-03-28 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240883#comment-13240883
 ] 

Uma Maheswara Rao G commented on HDFS-3122:
---

@Todd, have you looked at this issue?
I'd like to invite you to discuss possible solutions. It looks like it is 
hard to identify the block corruption in this situation.

 Block recovery with closeFile flag true can race with blockReport. Due to 
 this blocks are getting marked as corrupt.
 

 Key: HDFS-3122
 URL: https://issues.apache.org/jira/browse/HDFS-3122
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node, name-node
Affects Versions: 0.23.0, 0.24.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
Priority: Critical
 Attachments: blockCorrupt.txt


 *Block Report* can *race* with *Block Recovery* with closeFile flag true.
  Block report generated just before block recovery at DN side and due to N/W 
 problems, block report got delayed to NN. 
 After this, recovery success and generation stamp modifies to new one. 
 And primary DN invokes the commitBlockSynchronization and block got updated 
 in NN side. Also block got marked as complete, since the closeFile flag was 
 true. Updated with new genstamp.
 Now blockReport started processing at NN side. This particular block from RBW 
 (when it generated the BR at DN), and file was completed at NN side.
 Finally block will be marked as corrupt because of genstamp mismatch.
 {code}
  case RWR:
    if (!storedBlock.isComplete()) {
      return null; // not corrupt
    } else if (storedBlock.getGenerationStamp() != iblk.getGenerationStamp()) {
      return new BlockToMarkCorrupt(storedBlock,
          "reported " + reportedState + " replica with genstamp " +
          iblk.getGenerationStamp() + " does not match COMPLETE block's " +
          "genstamp in block map " + storedBlock.getGenerationStamp());
    } else { // COMPLETE block, same genstamp
 {code}
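
The branch quoted above can be reduced to a self-contained sketch that makes the decision rule explicit: an RWR replica reported against a COMPLETE block is marked corrupt exactly when the generation stamps disagree. The types here are simplified stand-ins, not the real NameNode/BlockManager classes.

```java
// Simplified model of the genstamp-mismatch corruption check. Block and
// State are hypothetical stand-ins for the real stored-block structures.
public class GenstampCheckSketch {
    enum State { COMPLETE, UNDER_CONSTRUCTION }

    static final class Block {
        final State state;
        final long genStamp;
        Block(State state, long genStamp) {
            this.state = state;
            this.genStamp = genStamp;
        }
        boolean isComplete() { return state == State.COMPLETE; }
    }

    /** A reported RWR replica is corrupt iff the stored block is COMPLETE
     *  and the generation stamps disagree. */
    static boolean isCorrupt(Block stored, long reportedGenStamp) {
        if (!stored.isComplete()) {
            return false; // still under construction: not corrupt
        }
        return stored.genStamp != reportedGenStamp;
    }

    public static void main(String[] args) {
        Block completed = new Block(State.COMPLETE, 1002);
        // A replica whose block report predates recovery carries the old
        // genstamp and is marked corrupt; a matching genstamp is not.
        System.out.println(isCorrupt(completed, 1001)); // true
        System.out.println(isCorrupt(completed, 1002)); // false
    }
}
```

This is what makes the race bite: once recovery bumps the genstamp and commitBlockSynchronization completes the block, the stale pre-recovery block report can only land in the corrupt branch.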





[jira] [Commented] (HDFS-3160) httpfs should exec catalina instead of forking it

2012-03-28 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240884#comment-13240884
 ] 

Hudson commented on HDFS-3160:
--

Integrated in Hadoop-Common-trunk-Commit #1944 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1944/])
HDFS-3160. httpfs should exec catalina instead of forking it. Contributed 
by Roman Shaposhnik (Revision 1306665)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1306665
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/sbin/httpfs.sh
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt







[jira] [Commented] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync follwed by closing that file

2012-03-28 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240909#comment-13240909
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3119:
--

 I don't see any OverReplicated processing from neededReplication priority 
 Queues. We will just remove from needed replication queues. Am I missing?

computeReplicationWork(..) only takes care of replication, not deletion.  
Deletion is done by computeInvalidateWork(..).

addStoredBlock(..) does call processOverReplicatedBlock(..) but the values of 
numCurrentReplica or fileReplication may be incorrect.  We should print them 
out for debugging.






[jira] [Commented] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync follwed by closing that file

2012-03-28 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240917#comment-13240917
 ] 

Uma Maheswara Rao G commented on HDFS-3119:
---

{quote}
addStoredBlock(..) does call processOverReplicatedBlock(..) but the values of 
numCurrentReplica or fileReplication may be incorrect.  We should print them 
out for debugging.
{quote}

Here addStoredBlock did not perform processOverReplicatedBlock because all DNs 
reported the block before the fileInodeUnderConstruction was finalized. 
addStoredBlock just returns if the block is still in the under-construction 
stage, and after that point there is no other path that triggers 
processOverReplicatedBlock again. The only option I see is to checkReplication 
after the block is finalized; currently that checks only neededReplications, 
not over-replication.

This is reproducible, but the issue appears randomly because of another 
scenario: if the block meets the minimum replication, fileInodeUnderConstruction 
can be finalized first, and then the remaining addStoredBlock calls can perform 
processOverReplicatedBlock, so the block gets invalidated.






[jira] [Commented] (HDFS-3160) httpfs should exec catalina instead of forking it

2012-03-28 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13240919#comment-13240919
 ] 

Hudson commented on HDFS-3160:
--

Integrated in Hadoop-Mapreduce-trunk-Commit #1957 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1957/])
HDFS-3160. httpfs should exec catalina instead of forking it. Contributed 
by Roman Shaposhnik (Revision 1306665)

 Result = ABORTED
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1306665
Files : 
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/sbin/httpfs.sh
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 httpfs should exec catalina instead of forking it
 -

 Key: HDFS-3160
 URL: https://issues.apache.org/jira/browse/HDFS-3160
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.23.1
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
 Fix For: 2.0.0

 Attachments: HDFS-3160.patch.txt


 In Bigtop we would like to start supporting constant monitoring of the 
 running daemons (BIGTOP-263). It would be nice if Oozie can support that 
 requirement by execing Catalina instead of forking it off. Currently we have 
 to track down the actual process being monitored through the script that 
 still hangs around.





[jira] [Commented] (HDFS-3119) Overreplicated block is not deleted even after the replication factor is reduced after sync follwed by closing that file

2012-03-28 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240925#comment-13240925
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3119:
--

 Here addStoredBlock did not perform processOverReplicatedBlock because all 
 DNs reported the block before finalizing the fileInodeUnderConstruction. 
 ...

Good point!  If the block is in COMPLETE state, I think we could handle 
over/under-replicated cases.

 Overreplicated block is not deleted even after the replication factor is 
 reduced after sync follwed by closing that file
 

 Key: HDFS-3119
 URL: https://issues.apache.org/jira/browse/HDFS-3119
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.24.0
Reporter: J.Andreina
Assignee: Brandon Li
Priority: Minor
 Fix For: 0.24.0, 0.23.2


 cluster setup:
 --
 1NN,2 DN,replication factor 2,block report interval 3sec ,block size-256MB
 step1: write a file filewrite.txt of size 90bytes with sync(not closed) 
 step2: change the replication factor to 1  using the command: ./hdfs dfs 
 -setrep 1 /filewrite.txt
 step3: close the file
 * At the NN side the log "Decreasing replication from 2 to 1 for 
 /filewrite.txt" has occurred, but the over-replicated blocks are not 
 deleted even after the block report is sent from the DN
 * While listing the file in the console using ./hdfs dfs -ls, the 
 replication factor for that file is shown as 1
 * The fsck report for that file displays that the file is replicated to 2 
 datanodes





[jira] [Commented] (HDFS-3154) Add a notion of immutable/mutable files

2012-03-28 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240937#comment-13240937
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3154:
--

 ... Generally, guarantees of immutability have use cases in the legal/SEC 
 environments.

This is a good point!  I did not think of it before.  Immutability is indeed 
a useful feature.

 Add a notion of immutable/mutable files
 ---

 Key: HDFS-3154
 URL: https://issues.apache.org/jira/browse/HDFS-3154
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE

 The notion of immutable file is useful since it lets the system and tools 
 optimize certain things as discussed in [this email 
 thread|http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201203.mbox/%3CCAPn_vTuZomPmBTypP8_1xTr49Sj0fy7Mjhik4DbcAA+BLH53=g...@mail.gmail.com%3E].
   Also, many applications require only immutable files.  Here is a proposal:
 - Immutable file means that the file content is immutable.  Operations such 
 as append and truncate, which change the file content, are not allowed to act 
 on immutable files.  However, metadata such as replication and permission 
 of an immutable file can be updated.  Immutable files can also be deleted or 
 renamed.
 - Users have to pass immutable/mutable as a flag in file creation.  This is 
 an unmodifiable property of the created file.
 - If users want to change the data in an immutable file, the file could be 
 copied to another file which is created as mutable.
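The proposed semantics can be sketched in a few lines. Everything below is hypothetical: `CreateMode`, `ImmutableFileSketch`, and its methods are illustrative names, not HDFS APIs; the sketch only models the rule that the immutable flag is fixed at creation, content changes are rejected, and metadata changes, deletes, and renames remain allowed.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the proposal above; none of these names exist in HDFS.
// A file created IMMUTABLE rejects content changes (e.g. append) but still
// allows metadata updates such as replication, plus delete/rename.
public class ImmutableFileSketch {
    enum CreateMode { MUTABLE, IMMUTABLE }

    static final class FileMeta {
        final CreateMode mode;   // fixed at creation, per the proposal
        short replication;       // metadata stays updatable
        FileMeta(CreateMode mode, short replication) {
            this.mode = mode;
            this.replication = replication;
        }
    }

    private final Map<String, FileMeta> files = new HashMap<>();

    void create(String path, CreateMode mode, short replication) {
        files.put(path, new FileMeta(mode, replication));
    }

    void append(String path) {
        if (files.get(path).mode == CreateMode.IMMUTABLE) {
            throw new UnsupportedOperationException(
                "cannot append to immutable file " + path);
        }
        // ... write bytes ...
    }

    void setReplication(String path, short replication) {
        files.get(path).replication = replication;  // allowed for both modes
    }

    boolean delete(String path) {
        return files.remove(path) != null;          // allowed for both modes
    }

    public static void main(String[] args) {
        ImmutableFileSketch fs = new ImmutableFileSketch();
        fs.create("/a.txt", CreateMode.IMMUTABLE, (short) 2);
        fs.setReplication("/a.txt", (short) 1);     // metadata change: OK
        boolean rejected = false;
        try {
            fs.append("/a.txt");                    // content change: rejected
        } catch (UnsupportedOperationException e) {
            rejected = true;
        }
        // prints "append rejected: true, delete ok: true"
        System.out.println("append rejected: " + rejected
            + ", delete ok: " + fs.delete("/a.txt"));
    }
}
```

The point of the model is that mutability is a creation-time property of the inode, which is why changing the data requires copying to a new file created as mutable.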





[jira] [Commented] (HDFS-3154) Add a notion of immutable/mutable files

2012-03-28 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240936#comment-13240936
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3154:
--

 So it is not hard to implement. ...

I am saying that caching immutable files is more efficient in run-time 
performance; I am not commenting on the difficulty of implementation.

 Making every file immutable introduces more complications for the users of 
 HDFS. ...

We are not making all files immutable.  Users could create mutable files.

But I do propose that the default for file creation be immutable, since all 
current applications require only immutable files (append is not in a stable 
release yet).  It opens an opportunity for performance improvement.  For 
example, [Scott's 
comment|http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201203.mbox/%3ccb927556.8de2b%25sc...@richrelevance.com%3E]
 on extent can be implemented only for mutable files.






[jira] [Commented] (HDFS-3000) Add a public API for setting quotas

2012-03-28 Thread Allen Wittenauer (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240940#comment-13240940
 ] 

Allen Wittenauer commented on HDFS-3000:


File count quotas are extremely common and hardly unique to HDFS.  Easy 
examples: UFS, ZFS, WAFL, VxFS, ...

 Add a public API for setting quotas
 ---

 Key: HDFS-3000
 URL: https://issues.apache.org/jira/browse/HDFS-3000
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client
Affects Versions: 0.23.1
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers

 Currently one can set the quota of a file or directory from the command line, 
 but if a user wants to set it programmatically, they need to use 
 DistributedFileSystem, which is annotated InterfaceAudience.Private.
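To illustrate what such a public API would enforce, here is a toy model of a namespace (file-count) quota, the kind of limit Allen mentions. `QuotaSketch`, `setQuota`, and `createFile` are illustrative names only; this is not HDFS code and says nothing about how DistributedFileSystem implements quotas.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a namespace (file-count) quota. Illustrative only: this is
// not HDFS code, just the contract a public quota-setting API would expose.
public class QuotaSketch {
    private final Map<String, Integer> counts = new HashMap<>(); // dir -> file count
    private final Map<String, Integer> quotas = new HashMap<>(); // dir -> max files

    public void setQuota(String dir, int maxFiles) {
        quotas.put(dir, maxFiles);
    }

    public void createFile(String dir) {
        int used = counts.getOrDefault(dir, 0);
        Integer quota = quotas.get(dir);
        // Reject the create if it would push the directory past its quota.
        if (quota != null && used + 1 > quota) {
            throw new IllegalStateException("namespace quota exceeded for " + dir);
        }
        counts.put(dir, used + 1);
    }
}
```

Usage: after `setQuota("/dir", 2)`, a third `createFile("/dir")` throws, which is the behavior a programmatic caller would want to control without reaching into a Private-annotated class.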





[jira] [Updated] (HDFS-3130) Move FSDataset implementation to a package

2012-03-28 Thread Tsz Wo (Nicholas), SZE (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-3130:
-

Attachment: svn_mv.sh
h3130_20120328_svn_mv.patch

svn_mv.sh: a script for svn mv
h3130_20120328_svn_mv.patch: for reviewing.

 Move FSDataset implementation to a package
 -

 Key: HDFS-3130
 URL: https://issues.apache.org/jira/browse/HDFS-3130
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Attachments: h3130_20120328_svn_mv.patch, svn_mv.sh








[jira] [Commented] (HDFS-3000) Add a public API for setting quotas

2012-03-28 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240944#comment-13240944
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-3000:
--

@Allen, Good to know and thanks for the input.  I still don't agree that it is 
extremely common.  :)






[jira] [Commented] (HDFS-3137) Bump LAST_UPGRADABLE_LAYOUT_VERSION

2012-03-28 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240948#comment-13240948
 ] 

Eli Collins commented on HDFS-3137:
---

For sanity I'll upgrade from a 0.18 install to a tarball generated from a build 
with this change.

 Bump LAST_UPGRADABLE_LAYOUT_VERSION
 ---

 Key: HDFS-3137
 URL: https://issues.apache.org/jira/browse/HDFS-3137
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Eli Collins
Assignee: Eli Collins
 Attachments: hdfs-3137.txt


 LAST_UPGRADABLE_LAYOUT_VERSION is currently -7, which corresponds to Hadoop 
 0.14. How about we bump it to -16, which corresponds to Hadoop 0.18?
 I don't think many people are using releases older than v0.18, and those who 
 are probably want to upgrade to the latest stable release (v1.0) anyway. To 
 upgrade to e.g. 0.23 they can still upgrade to v1.0 first and then upgrade 
 again from there.
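The check implied by the constant can be sketched as follows. The convention (which is real in HDFS) is that newer layouts use more negative numbers: -7 corresponds to 0.14 and -16 to 0.18. The constant name matches HDFS, but `isUpgradable` is my illustrative reading of the semantics, not the actual FSImage code.

```java
// Sketch of the upgradability check implied by bumping the constant.
// Newer HDFS layouts are more negative; the method name is illustrative.
public class LayoutCheck {
    static final int LAST_UPGRADABLE_LAYOUT_VERSION = -16; // after this change

    // An on-disk image is upgradable only if its layout is -16 or newer
    // (i.e., at or more negative than the last upgradable version).
    static boolean isUpgradable(int storedLayoutVersion) {
        return storedLayoutVersion <= LAST_UPGRADABLE_LAYOUT_VERSION;
    }

    public static void main(String[] args) {
        System.out.println(isUpgradable(-16)); // 0.18-era image: true
        System.out.println(isUpgradable(-7));  // 0.14-era image: false
    }
}
```

Under this reading, a -7 (0.14) image is rejected after the bump, matching the suggested two-hop path of upgrading through v1.0 first.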




