[jira] [Commented] (HDFS-7401) Add block info to DFSInputStream's WARN message when it adds node to deadNodes

2014-11-17 Thread Keith Pak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215365#comment-14215365
 ] 

Keith Pak commented on HDFS-7401:
-

The tests pass locally for me, and I don't see the logging levels affecting blocks.

> Add block info to DFSInputStream's WARN message when it adds node to deadNodes
> ------------------------------------------------------------------------------
>
>         Key: HDFS-7401
>         URL: https://issues.apache.org/jira/browse/HDFS-7401
>     Project: Hadoop HDFS
>  Issue Type: Bug
>    Reporter: Ming Ma
>    Assignee: Keith Pak
>    Priority: Minor
> Attachments: HDFS-7401.patch
>
>
> Block info is missing in the message below:
> {noformat}
> 2014-11-14 03:59:00,386 WARN org.apache.hadoop.hdfs.DFSClient: Failed to
> connect to /xx.xx.xx.xxx:50010 for block, add to deadNodes and continue.
> java.io.IOException: Got error for OP_READ_BLOCK
> {noformat}
> The code:
> {noformat}
> DFSInputStream.java
>   DFSClient.LOG.warn("Failed to connect to " + targetAddr + " for block"
>       + ", add to deadNodes and continue. " + ex, ex);
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7401) Add block info to DFSInputStream's WARN message when it adds node to deadNodes

2014-11-17 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Pak updated HDFS-7401:

Status: Patch Available  (was: In Progress)






[jira] [Work stopped] (HDFS-7401) Add block info to DFSInputStream's WARN message when it adds node to deadNodes

2014-11-17 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-7401 stopped by Keith Pak.
---





[jira] [Work started] (HDFS-7401) Add block info to DFSInputStream's WARN message when it adds node to deadNodes

2014-11-17 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-7401 started by Keith Pak.
---





[jira] [Updated] (HDFS-7401) Add block info to DFSInputStream's WARN message when it adds node to deadNodes

2014-11-17 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Pak updated HDFS-7401:

Attachment: HDFS-7401.patch






[jira] [Updated] (HDFS-7401) Add block info to DFSInputStream's WARN message when it adds node to deadNodes

2014-11-17 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Pak updated HDFS-7401:

Attachment: (was: HDFS-7401.patch)






[jira] [Updated] (HDFS-7401) Add block info to DFSInputStream's WARN message when it adds node to deadNodes

2014-11-17 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Pak updated HDFS-7401:

Attachment: HDFS-7401.patch

Patch attached to move the "blk" initialization out of the try block and log it.
Logging targetBlock is another option.
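For readers without the attachment, a minimal self-contained sketch of the kind of change being described (not the actual patch; "ExtendedBlock" below is a hypothetical local stand-in for org.apache.hadoop.hdfs.protocol.ExtendedBlock, and plain System.out stands in for DFSClient.LOG):

```java
import java.io.IOException;
import java.net.InetSocketAddress;

// Sketch: declare/initialize the block reference before the try, so the
// catch block can include it in the warning message.
public class DeadNodeLogSketch {

    // Hypothetical stand-in for org.apache.hadoop.hdfs.protocol.ExtendedBlock.
    static class ExtendedBlock {
        final long blockId;
        ExtendedBlock(long blockId) { this.blockId = blockId; }
        @Override public String toString() { return "blk_" + blockId; }
    }

    // Builds the WARN text with the block included, as the patch proposes.
    static String buildWarnMessage(InetSocketAddress targetAddr,
                                   ExtendedBlock blk, IOException ex) {
        return "Failed to connect to " + targetAddr + " for block " + blk
                + ", add to deadNodes and continue. " + ex;
    }

    public static void main(String[] args) {
        InetSocketAddress addr = new InetSocketAddress("10.0.0.1", 50010);
        ExtendedBlock blk = new ExtendedBlock(1073741825L); // initialized outside try
        try {
            throw new IOException("Got error for OP_READ_BLOCK");
        } catch (IOException ex) {
            // blk is in scope here because it was initialized before the try.
            System.out.println(buildWarnMessage(addr, blk, ex));
        }
    }
}
```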







[jira] [Assigned] (HDFS-7401) Add block info to DFSInputStream's WARN message when it adds node to deadNodes

2014-11-17 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Pak reassigned HDFS-7401:
---

Assignee: Keith Pak






[jira] [Updated] (HDFS-7394) Log at INFO level when InvalidToken is seen in ShortCircuitCache

2014-11-14 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Pak updated HDFS-7394:

Status: Open  (was: Patch Available)

> Log at INFO level when InvalidToken is seen in ShortCircuitCache
> ----------------------------------------------------------------
>
>         Key: HDFS-7394
>         URL: https://issues.apache.org/jira/browse/HDFS-7394
>     Project: Hadoop HDFS
>  Issue Type: Bug
>    Reporter: Kihwal Lee
>    Assignee: Keith Pak
>    Priority: Minor
> Attachments: HDFS-7394.patch
>
>
> For long running clients, getting an {{InvalidToken}} exception is expected 
> and the client refetches a block token when it happens.  The related events 
> are logged at INFO except the ones in {{ShortCircuitCache}}.  It will be 
> better if they are also made to log at INFO.





[jira] [Updated] (HDFS-7394) Log at INFO level when InvalidToken is seen in ShortCircuitCache

2014-11-14 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Pak updated HDFS-7394:

Attachment: HDFS-7394.patch






[jira] [Updated] (HDFS-7394) Log at INFO level when InvalidToken is seen in ShortCircuitCache

2014-11-14 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Pak updated HDFS-7394:

Status: Patch Available  (was: Open)






[jira] [Updated] (HDFS-7394) Log at INFO level when InvalidToken is seen in ShortCircuitCache

2014-11-14 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Pak updated HDFS-7394:

Attachment: (was: HDFS-7394.patch)






[jira] [Updated] (HDFS-7394) Log at INFO level when InvalidToken is seen in ShortCircuitCache

2014-11-14 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Pak updated HDFS-7394:

Attachment: HDFS-7394.patch

Attached patch
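The change here is just a log-level decision. A minimal self-contained sketch of the idea (not the actual patch; it uses java.util.logging rather than Hadoop's logging, and a local "InvalidToken" stand-in for org.apache.hadoop.security.token.SecretManager.InvalidToken):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of the level choice described in HDFS-7394: an expected InvalidToken
// is logged at INFO; anything else keeps a higher severity.
public class TokenLogLevelSketch {

    // Hypothetical stand-in for SecretManager.InvalidToken.
    static class InvalidToken extends Exception {
        InvalidToken(String msg) { super(msg); }
    }

    static Level levelFor(Throwable t) {
        // Long-running clients hit token expiry routinely and simply refetch
        // the block token, so the event does not warrant a WARN-level entry.
        return (t instanceof InvalidToken) ? Level.INFO : Level.WARNING;
    }

    public static void main(String[] args) {
        Logger log = Logger.getLogger("ShortCircuitCacheSketch");
        Throwable expected = new InvalidToken("Block token expired");
        Throwable unexpected = new RuntimeException("disk error");
        log.log(levelFor(expected), "short-circuit read failed", expected);
        log.log(levelFor(unexpected), "short-circuit read failed", unexpected);
    }
}
```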






[jira] [Updated] (HDFS-7394) Log at INFO level when InvalidToken is seen in ShortCircuitCache

2014-11-14 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Pak updated HDFS-7394:

Status: Patch Available  (was: Open)






[jira] [Work started] (HDFS-7394) Log at INFO level when InvalidToken is seen in ShortCircuitCache

2014-11-14 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-7394 started by Keith Pak.
---





[jira] [Work stopped] (HDFS-7394) Log at INFO level when InvalidToken is seen in ShortCircuitCache

2014-11-14 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-7394 stopped by Keith Pak.
---





[jira] [Work started] (HDFS-7394) Log at INFO level when InvalidToken is seen in ShortCircuitCache

2014-11-14 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-7394 started by Keith Pak.
---





[jira] [Assigned] (HDFS-7394) Log at INFO level when InvalidToken is seen in ShortCircuitCache

2014-11-14 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Pak reassigned HDFS-7394:
---

Assignee: Keith Pak






[jira] [Updated] (HDFS-6325) Append should fail if the last block has insufficient number of replicas

2014-05-16 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Pak updated HDFS-6325:


Attachment: HDFS-6325.patch

Attached patch.

Addressed the above comments; the TestFileAppend* tests pass locally.

> Append should fail if the last block has insufficient number of replicas
> ------------------------------------------------------------------------
>
>              Key: HDFS-6325
>              URL: https://issues.apache.org/jira/browse/HDFS-6325
>          Project: Hadoop HDFS
>       Issue Type: Bug
>       Components: namenode
> Affects Versions: 2.2.0
>         Reporter: Konstantin Shvachko
>         Assignee: Keith Pak
>      Attachments: HDFS-6325.patch, HDFS-6325.patch, HDFS-6325.patch,
> HDFS-6325.patch, HDFS-6325_test.patch, appendTest.patch
>
>
> Currently append() succeeds on a file with the last block that has no 
> replicas. But the subsequent updatePipeline() fails as there are no replicas 
> with the exception "Unable to retrieve blocks locations for last block". This 
> leaves the file unclosed, and others can not do anything with it until its 
> lease expires.
> The solution is to check replicas of the last block on the NameNode and fail 
> during append() rather than during updatePipeline().
> How many replicas should be present before NN allows to append? I see two 
> options:
> # min-replication: allow append if the last block is minimally replicated (1 
> by default)
> # full-replication: allow append if the last block is fully replicated (3 by 
> default)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6325) Append should fail if the last block has insufficient number of replicas

2014-05-15 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Pak updated HDFS-6325:


Attachment: HDFS-6325.patch

1. Fixed isSufficientlyReplicated and comments for the minimum replication check.
2. Added an isComplete check so we don't check under-construction files in
FSNamesystem.
3. Fixed lines in test.






[jira] [Updated] (HDFS-6325) Append should fail if the last block has insufficient number of replicas

2014-05-14 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Pak updated HDFS-6325:


Attachment: (was: HDFS-6325.patch)






[jira] [Updated] (HDFS-6325) Append should fail if the last block has insufficient number of replicas

2014-05-12 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Pak updated HDFS-6325:


Attachment: HDFS-6325.patch






[jira] [Updated] (HDFS-6325) Append should fail if the last block has insufficient number of replicas

2014-05-12 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Pak updated HDFS-6325:


Status: Patch Available  (was: In Progress)






[jira] [Commented] (HDFS-6325) Append should fail if the last block has insufficient number of replicas

2014-05-12 Thread Keith Pak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13995918#comment-13995918
 ] 

Keith Pak commented on HDFS-6325:
-

testFileAppend4 failure:
There may be an overlap between the shutdown of testAppendInsufficientLocations
and the startup of testCompleteOtherLeaseHoldersFile.
I changed testAppendInsufficientLocations to use the field variable "cluster"
instead of creating my own as a local variable, and also reduced the number of
DNs from 6 to 4. Otherwise, I see that testCompleteOtherLeaseHoldersFile is
progressing, so I think a bigger timeout may be required, as stated in this JIRA:
https://issues.apache.org/jira/browse/HADOOP-8596.

TestBalancerWithNodeGroup: 
This also seems to be an intermittent issue from 
https://issues.apache.org/jira/browse/HDFS-6250. 






[jira] [Updated] (HDFS-6325) Append should fail if the last block has insufficient number of replicas

2014-05-12 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Pak updated HDFS-6325:


Attachment: HDFS-6325.patch






[jira] [Updated] (HDFS-6325) Append should fail if the last block has insufficient number of replicas

2014-05-11 Thread Keith Pak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keith Pak updated HDFS-6325:


Attachment: HDFS-6325.patch

Here is a patch with a proposed fix to introduce
BlockManager#isSufficientlyReplicated and a unit test.
This will check that the block is replicated to at least the minimum of the
number of live DNs and the replication factor of the file.

The unit test will:
1. start 6 DNs
2. create a file with a rep factor of 3
3. kill the DNs holding the block locations of that file
4. run append and ensure it fails
5. ensure the file is not left open afterwards
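The replica check described above can be sketched as follows (a hypothetical standalone signature for illustration; the real BlockManager#isSufficientlyReplicated consults NameNode state rather than taking plain counts):

```java
// Standalone sketch of the proposed check: append is allowed only if the last
// block has at least min(replication factor, live DataNodes) live replicas.
public class AppendReplicaCheckSketch {

    static boolean isSufficientlyReplicated(int liveReplicas,
                                            int replicationFactor,
                                            int liveDataNodes) {
        // Capping the requirement at the live-DN count keeps append usable on
        // clusters smaller than the file's replication factor.
        int required = Math.min(replicationFactor, liveDataNodes);
        return liveReplicas >= required;
    }

    public static void main(String[] args) {
        // Scenario from the unit test: rep factor 3, 6 DNs, all 3 replica
        // holders killed -> append must fail.
        System.out.println(isSufficientlyReplicated(0, 3, 6)); // false
        System.out.println(isSufficientlyReplicated(3, 3, 6)); // true
        // Rep factor 3 on a 2-DN cluster: 2 live replicas are enough.
        System.out.println(isSufficientlyReplicated(2, 3, 2)); // true
    }
}
```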



