[jira] [Updated] (HDFS-9686) Remove useless boxing/unboxing code (Hadoop HDFS)

2016-01-29 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-9686:
-
Attachment: HDFS-9686.1.patch

Updated the patch.

> Remove useless boxing/unboxing code (Hadoop HDFS)
> -
>
> Key: HDFS-9686
> URL: https://issues.apache.org/jira/browse/HDFS-9686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 3.0.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
> Attachments: HDFS-9686.0.patch, HDFS-9686.1.patch
>
>
> There are lots of places where useless boxing/unboxing occur.
> To avoid performance issues, let's remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9686) Remove useless boxing/unboxing code (Hadoop HDFS)

2016-01-27 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-9686:
-
Attachment: (was: HDFS-9686.patch.0)

> Remove useless boxing/unboxing code (Hadoop HDFS)
> -
>
> Key: HDFS-9686
> URL: https://issues.apache.org/jira/browse/HDFS-9686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 3.0.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
>
> There are lots of places where useless boxing/unboxing occur.
> To avoid performance issues, let's remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9686) Remove useless boxing/unboxing code (Hadoop HDFS)

2016-01-27 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-9686:
-
Attachment: HDFS-9686.0.patch

Fixed style.

> Remove useless boxing/unboxing code (Hadoop HDFS)
> -
>
> Key: HDFS-9686
> URL: https://issues.apache.org/jira/browse/HDFS-9686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 3.0.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
> Attachments: HDFS-9686.0.patch
>
>
> There are lots of places where useless boxing/unboxing occur.
> To avoid performance issues, let's remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9686) Remove useless boxing/unboxing code (Hadoop HDFS)

2016-01-23 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-9686:
-
Attachment: (was: HDFS-9686.patch.0)

> Remove useless boxing/unboxing code (Hadoop HDFS)
> -
>
> Key: HDFS-9686
> URL: https://issues.apache.org/jira/browse/HDFS-9686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 3.0.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
>
> There are lots of places where useless boxing/unboxing occur.
> To avoid performance issues, let's remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9686) Remove useless boxing/unboxing code (Hadoop HDFS)

2016-01-23 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-9686:
-
Attachment: HDFS-9686.patch.0

> Remove useless boxing/unboxing code (Hadoop HDFS)
> -
>
> Key: HDFS-9686
> URL: https://issues.apache.org/jira/browse/HDFS-9686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 3.0.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
> Attachments: HDFS-9686.patch.0
>
>
> There are lots of places where useless boxing/unboxing occur.
> To avoid performance issues, let's remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9686) Remove useless boxing/unboxing code (Hadoop HDFS)

2016-01-23 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-9686:
-
Attachment: (was: HDFS-9686.patch.0)

> Remove useless boxing/unboxing code (Hadoop HDFS)
> -
>
> Key: HDFS-9686
> URL: https://issues.apache.org/jira/browse/HDFS-9686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 3.0.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
>
> There are lots of places where useless boxing/unboxing occur.
> To avoid performance issues, let's remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9686) Remove useless boxing/unboxing code (Hadoop HDFS)

2016-01-23 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-9686:
-
Attachment: HDFS-9686.patch.0

I've fixed the compile error.

> Remove useless boxing/unboxing code (Hadoop HDFS)
> -
>
> Key: HDFS-9686
> URL: https://issues.apache.org/jira/browse/HDFS-9686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 3.0.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
> Attachments: HDFS-9686.patch.0
>
>
> There are lots of places where useless boxing/unboxing occur.
> To avoid performance issues, let's remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9686) Remove useless boxing/unboxing code (Hadoop HDFS)

2016-01-22 Thread Kousuke Saruta (JIRA)
Kousuke Saruta created HDFS-9686:


 Summary: Remove useless boxing/unboxing code (Hadoop HDFS)
 Key: HDFS-9686
 URL: https://issues.apache.org/jira/browse/HDFS-9686
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: performance
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
Priority: Minor


There are lots of places where useless boxing/unboxing occur.
To avoid performance issues, let's remove them.
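
As a concrete illustration, this is the kind of pattern such a patch removes
(a hypothetical example, not taken from the patch itself):

{code}
// Boxed form: Long.valueOf allocates a Long object that is immediately
// auto-unboxed back to a primitive.
long boxed = Long.valueOf("42");

// Direct form: parses straight to the primitive, with no allocation.
long direct = Long.parseLong("42");
{code}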



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9686) Remove useless boxing/unboxing code (Hadoop HDFS)

2016-01-22 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-9686:
-
Attachment: HDFS-9686.patch.0

I've attached the initial patch.

> Remove useless boxing/unboxing code (Hadoop HDFS)
> -
>
> Key: HDFS-9686
> URL: https://issues.apache.org/jira/browse/HDFS-9686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 3.0.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
> Attachments: HDFS-9686.patch.0
>
>
> There are lots of places where useless boxing/unboxing occur.
> To avoid performance issues, let's remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HDFS-9686) Remove useless boxing/unboxing code (Hadoop HDFS)

2016-01-22 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-9686 started by Kousuke Saruta.

> Remove useless boxing/unboxing code (Hadoop HDFS)
> -
>
> Key: HDFS-9686
> URL: https://issues.apache.org/jira/browse/HDFS-9686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 3.0.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
> Attachments: HDFS-9686.patch.0
>
>
> There are lots of places where useless boxing/unboxing occur.
> To avoid performance issues, let's remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9686) Remove useless boxing/unboxing code (Hadoop HDFS)

2016-01-22 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-9686:
-
Status: Patch Available  (was: In Progress)

> Remove useless boxing/unboxing code (Hadoop HDFS)
> -
>
> Key: HDFS-9686
> URL: https://issues.apache.org/jira/browse/HDFS-9686
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: performance
>Affects Versions: 3.0.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>Priority: Minor
> Attachments: HDFS-9686.patch.0
>
>
> There are lots of places where useless boxing/unboxing occur.
> To avoid performance issues, let's remove them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HDFS-5800) Typo: soft-limit for hard-limit in DFSClient

2014-01-19 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta moved HADOOP-10243 to HDFS-5800:
---

Key: HDFS-5800  (was: HADOOP-10243)
Project: Hadoop HDFS  (was: Hadoop Common)

 Typo: soft-limit for hard-limit in DFSClient
 

 Key: HDFS-5800
 URL: https://issues.apache.org/jira/browse/HDFS-5800
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kousuke Saruta
Priority: Trivial

 In DFSClient#renewLease, there is a log message as follows.
 {code}
 LOG.warn("Failed to renew lease for " + clientName + " for "
     + (elapsed/1000) + " seconds (>= soft-limit ="
     + (HdfsConstants.LEASE_HARDLIMIT_PERIOD/1000) + " seconds.) "
 {code}
 This log message says "soft-limit" but, considering the context, I think 
 it's a typo for "hard-limit".
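
 For reference, the fix would presumably just relabel the limit, along these 
 lines (a sketch, not the attached patch):
 {code}
 LOG.warn("Failed to renew lease for " + clientName + " for "
     + (elapsed/1000) + " seconds (>= hard-limit ="
     + (HdfsConstants.LEASE_HARDLIMIT_PERIOD/1000) + " seconds.) "
 {code}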



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HDFS-5800) Typo: soft-limit for hard-limit in DFSClient

2014-01-19 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5800:
-

Attachment: HDFS-5800.patch

I've attached a patch for this issue.

 Typo: soft-limit for hard-limit in DFSClient
 

 Key: HDFS-5800
 URL: https://issues.apache.org/jira/browse/HDFS-5800
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kousuke Saruta
Priority: Trivial
 Attachments: HDFS-5800.patch


 In DFSClient#renewLease, there is a log message as follows.
 {code}
 LOG.warn("Failed to renew lease for " + clientName + " for "
     + (elapsed/1000) + " seconds (>= soft-limit ="
     + (HdfsConstants.LEASE_HARDLIMIT_PERIOD/1000) + " seconds.) "
 {code}
 This log message says "soft-limit" but, considering the context, I think 
 it's a typo for "hard-limit".



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HDFS-5800) Typo: soft-limit for hard-limit in DFSClient

2014-01-19 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5800:
-

Assignee: Kousuke Saruta
  Status: Patch Available  (was: Open)

 Typo: soft-limit for hard-limit in DFSClient
 

 Key: HDFS-5800
 URL: https://issues.apache.org/jira/browse/HDFS-5800
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
Priority: Trivial
 Attachments: HDFS-5800.patch


 In DFSClient#renewLease, there is a log message as follows.
 {code}
 LOG.warn("Failed to renew lease for " + clientName + " for "
     + (elapsed/1000) + " seconds (>= soft-limit ="
     + (HdfsConstants.LEASE_HARDLIMIT_PERIOD/1000) + " seconds.) "
 {code}
 This log message says "soft-limit" but, considering the context, I think 
 it's a typo for "hard-limit".



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HDFS-5761) DataNode fails to validate integrity for checksum type NULL when DataNode recovers

2014-01-16 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13873719#comment-13873719
 ] 

Kousuke Saruta commented on HDFS-5761:
--

Thanks for your comment, Uma.
At first, I thought the same as you: I thought it would be good to branch the 
logic depending on whether the checksum type is NULL or not.
But, on second thought, BlockPoolSlice should not have logic that depends on 
a specific checksum algorithm.
How to verify is the responsibility of each checksum algorithm.


 DataNode fails to validate integrity for checksum type NULL when DataNode 
 recovers 
 ---

 Key: HDFS-5761
 URL: https://issues.apache.org/jira/browse/HDFS-5761
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
 Attachments: HDFS-5761.patch


 When the DataNode goes down while writing blocks, the blocks are not 
 finalized, and the next time the DataNode recovers, integrity validation 
 will run.
 But if we use NULL for the checksum algorithm (we can set NULL via 
 dfs.checksum.type), the DataNode will fail to validate integrity and cannot 
 come up.
 The cause is in BlockPoolSlice#validateIntegrity.
 In that method, there is the following code.
 {code}
 long numChunks = Math.min(
     (blockFileLen + bytesPerChecksum - 1)/bytesPerChecksum,
     (metaFileLen - crcHeaderLen)/checksumSize);
 {code}
 When we choose the NULL checksum, checksumSize is 0, so an 
 ArithmeticException will be thrown and the DataNode cannot come up.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HDFS-5761) DataNode fail to validate integrity for checksum type NULL when DataNode recovers

2014-01-13 Thread Kousuke Saruta (JIRA)
Kousuke Saruta created HDFS-5761:


 Summary: DataNode fail to validate integrity for checksum type 
NULL when DataNode recovers 
 Key: HDFS-5761
 URL: https://issues.apache.org/jira/browse/HDFS-5761
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta


When the DataNode goes down while writing blocks, the blocks are not finalized, 
and the next time the DataNode recovers, integrity validation will run.
But if we use NULL for the checksum algorithm (we can set NULL via 
dfs.checksum.type), the DataNode will fail to validate integrity and cannot 
come up.

The cause is in BlockPoolSlice#validateIntegrity.
In that method, there is the following code.

{code}
long numChunks = Math.min(
    (blockFileLen + bytesPerChecksum - 1)/bytesPerChecksum,
    (metaFileLen - crcHeaderLen)/checksumSize);
{code}

When we choose the NULL checksum, checksumSize is 0, so an ArithmeticException 
will be thrown and the DataNode cannot come up.
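
A minimal sketch of a guard, assuming the surrounding variables of 
validateIntegrity (an illustration only, not the attached patch):

{code}
// When checksumSize == 0 (the NULL checksum), skip the meta-file bound
// instead of dividing by zero.
long chunksInBlock = (blockFileLen + bytesPerChecksum - 1) / bytesPerChecksum;
long numChunks = (checksumSize == 0)
    ? chunksInBlock
    : Math.min(chunksInBlock, (metaFileLen - crcHeaderLen) / checksumSize);
{code}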




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HDFS-5761) DataNode fails to validate integrity for checksum type NULL when DataNode recovers

2014-01-13 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5761:
-

Summary: DataNode fails to validate integrity for checksum type NULL when 
DataNode recovers   (was: DataNode fail to validate integrity for checksum type 
NULL when DataNode recovers )

 DataNode fails to validate integrity for checksum type NULL when DataNode 
 recovers 
 ---

 Key: HDFS-5761
 URL: https://issues.apache.org/jira/browse/HDFS-5761
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta

 When the DataNode goes down while writing blocks, the blocks are not 
 finalized, and the next time the DataNode recovers, integrity validation 
 will run.
 But if we use NULL for the checksum algorithm (we can set NULL via 
 dfs.checksum.type), the DataNode will fail to validate integrity and cannot 
 come up.
 The cause is in BlockPoolSlice#validateIntegrity.
 In that method, there is the following code.
 {code}
 long numChunks = Math.min(
     (blockFileLen + bytesPerChecksum - 1)/bytesPerChecksum,
     (metaFileLen - crcHeaderLen)/checksumSize);
 {code}
 When we choose the NULL checksum, checksumSize is 0, so an 
 ArithmeticException will be thrown and the DataNode cannot come up.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HDFS-5761) DataNode fails to validate integrity for checksum type NULL when DataNode recovers

2014-01-13 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5761:
-

Attachment: HDFS-5761.patch

I've attached a patch for this issue.

 DataNode fails to validate integrity for checksum type NULL when DataNode 
 recovers 
 ---

 Key: HDFS-5761
 URL: https://issues.apache.org/jira/browse/HDFS-5761
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
 Attachments: HDFS-5761.patch


 When the DataNode goes down while writing blocks, the blocks are not 
 finalized, and the next time the DataNode recovers, integrity validation 
 will run.
 But if we use NULL for the checksum algorithm (we can set NULL via 
 dfs.checksum.type), the DataNode will fail to validate integrity and cannot 
 come up.
 The cause is in BlockPoolSlice#validateIntegrity.
 In that method, there is the following code.
 {code}
 long numChunks = Math.min(
     (blockFileLen + bytesPerChecksum - 1)/bytesPerChecksum,
     (metaFileLen - crcHeaderLen)/checksumSize);
 {code}
 When we choose the NULL checksum, checksumSize is 0, so an 
 ArithmeticException will be thrown and the DataNode cannot come up.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HDFS-5761) DataNode fails to validate integrity for checksum type NULL when DataNode recovers

2014-01-13 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5761:
-

Status: Patch Available  (was: Open)

 DataNode fails to validate integrity for checksum type NULL when DataNode 
 recovers 
 ---

 Key: HDFS-5761
 URL: https://issues.apache.org/jira/browse/HDFS-5761
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
 Attachments: HDFS-5761.patch


 When the DataNode goes down while writing blocks, the blocks are not 
 finalized, and the next time the DataNode recovers, integrity validation 
 will run.
 But if we use NULL for the checksum algorithm (we can set NULL via 
 dfs.checksum.type), the DataNode will fail to validate integrity and cannot 
 come up.
 The cause is in BlockPoolSlice#validateIntegrity.
 In that method, there is the following code.
 {code}
 long numChunks = Math.min(
     (blockFileLen + bytesPerChecksum - 1)/bytesPerChecksum,
     (metaFileLen - crcHeaderLen)/checksumSize);
 {code}
 When we choose the NULL checksum, checksumSize is 0, so an 
 ArithmeticException will be thrown and the DataNode cannot come up.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HDFS-5546) race condition crashes hadoop ls -R when directories are moved/removed

2013-11-22 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5546:
-

Attachment: HDFS-5546.1.patch

I've tried to make a patch for this issue.
How does it look?

 race condition crashes hadoop ls -R when directories are moved/removed
 

 Key: HDFS-5546
 URL: https://issues.apache.org/jira/browse/HDFS-5546
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-5546.1.patch


 This seems to be a rare race condition where we have a sequence of events 
 like this:
 1. org.apache.hadoop.shell.Ls calls DFS#getFileStatus on directory D.
 2. someone deletes or moves directory D
 3. org.apache.hadoop.shell.Ls calls PathData#getDirectoryContents(D), which 
 calls DFS#listStatus(D). This throws FileNotFoundException.
 4. ls command terminates with FNF



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5546) race condition crashes hadoop ls -R when directories are moved/removed

2013-11-22 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13830366#comment-13830366
 ] 

Kousuke Saruta commented on HDFS-5546:
--

I see, and I will try to modify that.
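
Something along these lines (a hypothetical sketch, simplified to plain 
java.io rather than the actual shell classes):

{code}
// If the directory vanished between being stat'ed and being listed, report
// it and keep the recursive traversal going instead of aborting.
static String[] listOrEmpty(java.io.File dir) {
  String[] children = dir.list();  // returns null if dir no longer exists
  if (children == null) {
    System.err.println("ls: " + dir + ": No such file or directory");
    return new String[0];
  }
  return children;
}
{code}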

 race condition crashes hadoop ls -R when directories are moved/removed
 

 Key: HDFS-5546
 URL: https://issues.apache.org/jira/browse/HDFS-5546
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-5546.1.patch


 This seems to be a rare race condition where we have a sequence of events 
 like this:
 1. org.apache.hadoop.shell.Ls calls DFS#getFileStatus on directory D.
 2. someone deletes or moves directory D
 3. org.apache.hadoop.shell.Ls calls PathData#getDirectoryContents(D), which 
 calls DFS#listStatus(D). This throws FileNotFoundException.
 4. ls command terminates with FNF



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4506) In branch-1, HDFS short circuit fails non-transparently when user does not have unix permissions

2013-11-21 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13829474#comment-13829474
 ] 

Kousuke Saruta commented on HDFS-4506:
--

getLocalBlockReader calls BlockReaderLocal.newBlockReader, and newBlockReader 
accesses a local file. When we access local files with FileInputStream (and 
its subclasses) without proper permissions, a FileNotFoundException will be 
thrown. So, could we modify it as follows?

{code}
  private BlockReader getLocalBlockReader(Configuration conf,
      String src, Block blk, Token<BlockTokenIdentifier> accessToken,
      DatanodeInfo chosenNode, int socketTimeout, long offsetIntoBlock)
      throws InvalidToken, IOException {
    try {
      return BlockReaderLocal.newBlockReader(conf, src, blk, accessToken,
          chosenNode, socketTimeout, offsetIntoBlock, blk.getNumBytes()
          - offsetIntoBlock, connectToDnViaHostname);
-   } catch (RemoteException re) {
-     throw re.unwrapRemoteException(InvalidToken.class,
-         AccessControlException.class);
+   } catch (FileNotFoundException fe) {
+     throw new AccessControlException(fe);
+   } catch (IOException ie) {
+     throw ie;
    }
  }
{code}

 In branch-1, HDFS short circuit fails non-transparently when user does not 
 have unix permissions
 

 Key: HDFS-4506
 URL: https://issues.apache.org/jira/browse/HDFS-4506
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 1.1.1
Reporter: Enis Soztutar

 We found a case, where if the short circuit user name is configured 
 correctly, but the user does not have enough permissions in unix, DFS 
 operations fails with IOException, rather than silently failing over through 
 datanode. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HDFS-5546) race condition crashes hadoop ls -R when directories are moved/removed

2013-11-21 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta reassigned HDFS-5546:


Assignee: Kousuke Saruta

 race condition crashes hadoop ls -R when directories are moved/removed
 

 Key: HDFS-5546
 URL: https://issues.apache.org/jira/browse/HDFS-5546
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Colin Patrick McCabe
Assignee: Kousuke Saruta
Priority: Minor

 This seems to be a rare race condition where we have a sequence of events 
 like this:
 1. org.apache.hadoop.shell.Ls calls DFS#getFileStatus on directory D.
 2. someone deletes or moves directory D
 3. org.apache.hadoop.shell.Ls calls PathData#getDirectoryContents(D), which 
 calls DFS#listStatus(D). This throws FileNotFoundException.
 4. ls command terminates with FNF



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5546) race condition crashes hadoop ls -R when directories are moved/removed

2013-11-21 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5546:
-

Assignee: (was: Kousuke Saruta)

 race condition crashes hadoop ls -R when directories are moved/removed
 

 Key: HDFS-5546
 URL: https://issues.apache.org/jira/browse/HDFS-5546
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Colin Patrick McCabe
Priority: Minor

 This seems to be a rare race condition where we have a sequence of events 
 like this:
 1. org.apache.hadoop.shell.Ls calls DFS#getFileStatus on directory D.
 2. someone deletes or moves directory D
 3. org.apache.hadoop.shell.Ls calls PathData#getDirectoryContents(D), which 
 calls DFS#listStatus(D). This throws FileNotFoundException.
 4. ls command terminates with FNF



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5552) Fix wrong information of Cluster summay in dfshealth.html

2013-11-21 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13829760#comment-13829760
 ] 

Kousuke Saruta commented on HDFS-5552:
--

In FSNamesystem#metaSave, there is a similar expression.

{code}
private void metaSave(PrintWriter out) {
  assert hasWriteLock();
  long totalInodes = this.dir.totalInodes();
  long totalBlocks = this.getBlocksTotal();
  out.println(totalInodes + " files and directories, " + totalBlocks
      + " blocks = " + (totalInodes + totalBlocks)
      + " total filesystem objects");

  blockManager.metaSave(out);
}
{code}

As you can see, in metaSave "files and directories, blocks = total filesystem 
objects" means totalInodes + totalBlocks = (totalInodes + totalBlocks).
On the other hand, dfshealth.dust.html defines "files and directories, blocks = 
total filesystem object(s)" as {TotalLoad} (FSNamesystem#getTotalLoad) + 
{BlocksTotal} (FSNamesystem#getBlocksTotal) = 
{FilesTotal} (FSNamesystem#getFilesTotal).

TotalLoad is the number of active xceivers and FilesTotal is the number of 
inodes, so these are different from metaSave's numbers.

 Fix wrong information of Cluster summay in dfshealth.html
 ---

 Key: HDFS-5552
 URL: https://issues.apache.org/jira/browse/HDFS-5552
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Shinichi Yamashita
Assignee: Haohui Mai
 Attachments: HDFS-5552.000.patch, dfshealth-html.png


 "files and directories" + "blocks" should equal "total filesystem object(s)", 
 but a wrong value is displayed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5552) Fix wrong information of Cluster summay in dfshealth.html

2013-11-21 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13829762#comment-13829762
 ] 

Kousuke Saruta commented on HDFS-5552:
--

[~wheat9] addressed this issue while I was commenting.

 Fix wrong information of Cluster summay in dfshealth.html
 ---

 Key: HDFS-5552
 URL: https://issues.apache.org/jira/browse/HDFS-5552
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Shinichi Yamashita
Assignee: Haohui Mai
 Attachments: HDFS-5552.000.patch, dfshealth-html.png


 "files and directories" + "blocks" should equal "total filesystem object(s)", 
 but a wrong value is displayed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5493) DFSClient#DFSInputStream#blockSeekTo may leak socket connection.

2013-11-13 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5493:
-

Status: Open  (was: Patch Available)

 DFSClient#DFSInputStream#blockSeekTo may leak socket connection.
 

 Key: HDFS-5493
 URL: https://issues.apache.org/jira/browse/HDFS-5493
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 1.2.1
Reporter: Chris Nauroth
 Attachments: HDFS-5493-branch-1.patch


 {{DFSClient#DFSInputStream#blockSeekTo}} may handle {{IOException}} by 
 refetching a new block access token and then reattempting {{fetchBlockAt}}.  
 However, {{fetchBlockAt}} may then throw its own {{IOException}}.  If this 
 happens, then the method skips calling {{Socket#close}}.  This is likely to 
 manifest as a leak of sockets left in CLOSE_WAIT status.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5500) Critical datanode threads may terminate silently on uncaught exceptions

2013-11-13 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13822035#comment-13822035
 ] 

Kousuke Saruta commented on HDFS-5500:
--

Hi,

I'm investigating this issue.
When DU#refreshInterval > 0, DURefreshThread runs and executes the du command 
against the directory (DU#dirPath) once per refreshInterval milliseconds.
So, normally, the value DU#getUsed returns is refreshed once per 
refreshInterval milliseconds.
When we put some files into the directory that DU#dirPath refers to, 
BlockPoolSlice#getDfsUsed will return a value that accounts for the size of 
the files we put there.

But if DURefreshThread dies because of an uncaught exception, we can't know 
it, and the value BlockPoolSlice#getDfsUsed returns will *never* be 
updated.
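
A sketch of keeping the thread alive (hypothetical; shouldRun, 
refreshInterval, refresh() and LOG are assumed names):

{code}
while (shouldRun) {
  try {
    Thread.sleep(refreshInterval);
    refresh();  // runs the du command and updates the cached value
  } catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    break;
  } catch (Throwable t) {
    // Don't let an unexpected exception silently kill the refresh thread.
    LOG.warn("du refresh thread caught an unexpected exception; continuing", t);
  }
}
{code}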

 Critical datanode threads may terminate silently on uncaught exceptions
 ---

 Key: HDFS-5500
 URL: https://issues.apache.org/jira/browse/HDFS-5500
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Priority: Critical

 We've seen refreshUsed (DU) thread disappearing on uncaught exceptions. This 
 can go unnoticed for a long time.  If OOM occurs, more things can go wrong.  
 On one occasion, Timer, multiple refreshUsed and DataXceiverServer thread had 
 terminated.  
 DataXceiverServer catches OutOfMemoryError and sleeps for 30 seconds, but I 
 am not sure it is really helpful. In once case, the thread did it multiple 
 times then terminated. I suspect another OOM was thrown while in a catch 
 block.  As a result, the server socket was not closed and clients hung on 
 connect. If it had at least closed the socket, client-side would have been 
 impacted less.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5493) DFSClient#DFSInputStream#blockSeekTo may leak socket connection.

2013-11-12 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5493:
-

Status: Patch Available  (was: Open)

 DFSClient#DFSInputStream#blockSeekTo may leak socket connection.
 

 Key: HDFS-5493
 URL: https://issues.apache.org/jira/browse/HDFS-5493
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 1.2.1
Reporter: Chris Nauroth
 Attachments: HDFS-5493.patch


 {{DFSClient#DFSInputStream#blockSeekTo}} may handle {{IOException}} by 
 refetching a new block access token and then reattempting {{fetchBlockAt}}.  
 However, {{fetchBlockAt}} may then throw its own {{IOException}}.  If this 
 happens, then the method skips calling {{Socket#close}}.  This is likely to 
 manifest as a leak of sockets left in CLOSE_WAIT status.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5493) DFSClient#DFSInputStream#blockSeekTo may leak socket connection.

2013-11-12 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5493:
-

Attachment: HDFS-5493

I've tried to make a patch for this issue.
What do you think of this solution?
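
Roughly, the shape of the change (a sketch with assumed names such as sock, 
not the patch itself):

{code}
boolean success = false;
try {
  // refetch the block access token and retry fetchBlockAt(...)
  success = true;
} finally {
  if (!success && sock != null) {
    try {
      sock.close();  // don't leak the socket into CLOSE_WAIT
    } catch (IOException e) {
      // ignore; we are already on an error path
    }
  }
}
{code}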

 DFSClient#DFSInputStream#blockSeekTo may leak socket connection.
 

 Key: HDFS-5493
 URL: https://issues.apache.org/jira/browse/HDFS-5493
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 1.2.1
Reporter: Chris Nauroth
 Attachments: HDFS-5493.patch


 {{DFSClient#DFSInputStream#blockSeekTo}} may handle {{IOException}} by 
 refetching a new block access token and then reattempting {{fetchBlockAt}}.  
 However, {{fetchBlockAt}} may then throw its own {{IOException}}.  If this 
 happens, then the method skips calling {{Socket#close}}.  This is likely to 
 manifest as a leak of sockets left in CLOSE_WAIT status.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5493) DFSClient#DFSInputStream#blockSeekTo may leak socket connection.

2013-11-12 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5493:
-

Attachment: HDFS-5493.patch

Sorry, I attached the wrong patch; I've re-submitted it.

 DFSClient#DFSInputStream#blockSeekTo may leak socket connection.
 

 Key: HDFS-5493
 URL: https://issues.apache.org/jira/browse/HDFS-5493
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 1.2.1
Reporter: Chris Nauroth
 Attachments: HDFS-5493.patch


 {{DFSClient#DFSInputStream#blockSeekTo}} may handle {{IOException}} by 
 refetching a new block access token and then reattempting {{fetchBlockAt}}.  
 However, {{fetchBlockAt}} may then throw its own {{IOException}}.  If this 
 happens, then the method skips calling {{Socket#close}}.  This is likely to 
 manifest as a leak of sockets left in CLOSE_WAIT status.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5493) DFSClient#DFSInputStream#blockSeekTo may leak socket connection.

2013-11-12 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5493:
-

Attachment: (was: HDFS-5493)

 DFSClient#DFSInputStream#blockSeekTo may leak socket connection.
 

 Key: HDFS-5493
 URL: https://issues.apache.org/jira/browse/HDFS-5493
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 1.2.1
Reporter: Chris Nauroth
 Attachments: HDFS-5493.patch


 {{DFSClient#DFSInputStream#blockSeekTo}} may handle {{IOException}} by 
 refetching a new block access token and then reattempting {{fetchBlockAt}}.  
 However, {{fetchBlockAt}} may then throw its own {{IOException}}.  If this 
 happens, then the method skips calling {{Socket#close}}.  This is likely to 
 manifest as a leak of sockets left in CLOSE_WAIT status.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5493) DFSClient#DFSInputStream#blockSeekTo may leak socket connection.

2013-11-12 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5493:
-

Attachment: HDFS-5493-branch-1.patch

 DFSClient#DFSInputStream#blockSeekTo may leak socket connection.
 

 Key: HDFS-5493
 URL: https://issues.apache.org/jira/browse/HDFS-5493
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 1.2.1
Reporter: Chris Nauroth
 Attachments: HDFS-5493-branch-1.patch


 {{DFSClient#DFSInputStream#blockSeekTo}} may handle {{IOException}} by 
 refetching a new block access token and then reattempting {{fetchBlockAt}}.  
 However, {{fetchBlockAt}} may then throw its own {{IOException}}.  If this 
 happens, then the method skips calling {{Socket#close}}.  This is likely to 
 manifest as a leak of sockets left in CLOSE_WAIT status.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5493) DFSClient#DFSInputStream#blockSeekTo may leak socket connection.

2013-11-12 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5493:
-

Attachment: (was: HDFS-5493.patch)

 DFSClient#DFSInputStream#blockSeekTo may leak socket connection.
 

 Key: HDFS-5493
 URL: https://issues.apache.org/jira/browse/HDFS-5493
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 1.2.1
Reporter: Chris Nauroth
 Attachments: HDFS-5493-branch-1.patch


 {{DFSClient#DFSInputStream#blockSeekTo}} may handle {{IOException}} by 
 refetching a new block access token and then reattempting {{fetchBlockAt}}.  
 However, {{fetchBlockAt}} may then throw its own {{IOException}}.  If this 
 happens, then the method skips calling {{Socket#close}}.  This is likely to 
 manifest as a leak of sockets left in CLOSE_WAIT status.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5493) DFSClient#DFSInputStream#blockSeekTo may leak socket connection.

2013-11-12 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5493:
-

Status: Patch Available  (was: Open)

 DFSClient#DFSInputStream#blockSeekTo may leak socket connection.
 

 Key: HDFS-5493
 URL: https://issues.apache.org/jira/browse/HDFS-5493
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 1.2.1
Reporter: Chris Nauroth
 Attachments: HDFS-5493-branch-1.patch


 {{DFSClient#DFSInputStream#blockSeekTo}} may handle {{IOException}} by 
 refetching a new block access token and then reattempting {{fetchBlockAt}}.  
 However, {{fetchBlockAt}} may then throw its own {{IOException}}.  If this 
 happens, then the method skips calling {{Socket#close}}.  This is likely to 
 manifest as a leak of sockets left in CLOSE_WAIT status.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5493) DFSClient#DFSInputStream#blockSeekTo may leak socket connection.

2013-11-12 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5493:
-

Status: Open  (was: Patch Available)

 DFSClient#DFSInputStream#blockSeekTo may leak socket connection.
 

 Key: HDFS-5493
 URL: https://issues.apache.org/jira/browse/HDFS-5493
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 1.2.1
Reporter: Chris Nauroth
 Attachments: HDFS-5493-branch-1.patch


 {{DFSClient#DFSInputStream#blockSeekTo}} may handle {{IOException}} by 
 refetching a new block access token and then reattempting {{fetchBlockAt}}.  
 However, {{fetchBlockAt}} may then throw its own {{IOException}}.  If this 
 happens, then the method skips calling {{Socket#close}}.  This is likely to 
 manifest as a leak of sockets left in CLOSE_WAIT status.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HDFS-5180) Output the processing time of slow RPC request to node's log

2013-11-11 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta reassigned HDFS-5180:


Assignee: Kousuke Saruta  (was: Shinichi Yamashita)

 Output the processing time of slow RPC request to node's log
 

 Key: HDFS-5180
 URL: https://issues.apache.org/jira/browse/HDFS-5180
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0
Reporter: Shinichi Yamashita
Assignee: Kousuke Saruta
 Attachments: HDFS-5180.patch, HDFS-5180.patch


 In the current trunk, the processing time of every RPC request is logged at 
 the DEBUG level.
 When troubleshooting a large-scale cluster, the current implementation is 
 hard to work with.
 Therefore we should set a threshold and output only slow RPCs to the node's 
 log, so that abnormal signs can be noticed.
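 A hypothetical sketch of the thresholding (all names assumed, not from the 
 attached patches):
 {code}
 long elapsed = Time.now() - callStartTime;
 if (elapsed > slowRpcThresholdMs) {
   LOG.warn("Slow RPC: " + call + " took " + elapsed + " ms");
 } else if (LOG.isDebugEnabled()) {
   LOG.debug("RPC: " + call + " took " + elapsed + " ms");
 }
 {code}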



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5180) Output the processing time of slow RPC request to node's log

2013-11-11 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5180:
-

Assignee: Shinichi Yamashita  (was: Kousuke Saruta)

 Output the processing time of slow RPC request to node's log
 

 Key: HDFS-5180
 URL: https://issues.apache.org/jira/browse/HDFS-5180
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0
Reporter: Shinichi Yamashita
Assignee: Shinichi Yamashita
 Attachments: HDFS-5180.patch, HDFS-5180.patch


 In the current trunk, the processing time of every RPC request is logged at 
 the DEBUG level.
 When troubleshooting a large-scale cluster, the current implementation is 
 hard to work with.
 Therefore we should set a threshold and output only slow RPCs to the node's 
 log, so that abnormal signs can be noticed.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5370) Typo in Error Message: different between range in condition and range in error message

2013-10-16 Thread Kousuke Saruta (JIRA)
Kousuke Saruta created HDFS-5370:


 Summary: Typo in Error Message:  different between range in 
condition and range in error message
 Key: HDFS-5370
 URL: https://issues.apache.org/jira/browse/HDFS-5370
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Priority: Minor
 Fix For: 3.0.0


In DFSInputStream#getBlockAt, there is an if statement with a condition 
using ">=" but the error message says ">".
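
A reconstructed sketch of the pattern in question (hypothetical; the guard 
rejects offset == getFileLength() too, so the message should say ">="):

{code}
if (offset < 0 || offset >= getFileLength()) {
  // The ">" in this message is the typo; it should be ">=".
  throw new IOException("offset < 0 || offset > getFileLength(), offset="
      + offset);
}
{code}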



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5370) Typo in Error Message: different between range in condition and range in error message

2013-10-16 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5370:
-

Attachment: HDFS-5370.patch

I've attached a patch for this issue.

 Typo in Error Message:  different between range in condition and range in 
 error message
 ---

 Key: HDFS-5370
 URL: https://issues.apache.org/jira/browse/HDFS-5370
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-5370.patch


 In DFSInputStream#getBlockAt, there is an if statement with a condition 
 using ">=" but the error message says ">".



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5370) Typo in Error Message: different between range in condition and range in error message

2013-10-16 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5370:
-

Status: Patch Available  (was: Open)

 Typo in Error Message:  different between range in condition and range in 
 error message
 ---

 Key: HDFS-5370
 URL: https://issues.apache.org/jira/browse/HDFS-5370
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-5370.patch


 In DFSInputStream#getBlockAt, there is an if statement with a condition 
 using ">=" but the error message says ">".



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5206) Some unreferred constants in DFSConfigKeys should be cleanuped

2013-09-16 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5206:
-

Assignee: Kousuke Saruta

 Some unreferred constants in DFSConfigKeys should be cleanuped
 --

 Key: HDFS-5206
 URL: https://issues.apache.org/jira/browse/HDFS-5206
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
Priority: Minor
 Fix For: 3.0.0


 There are some constants in DFSConfigKeys.java which are no longer referenced 
 by any code.
 The unreferenced constants are listed below.
 DFS_STREAM_BUFFER_SIZE_KEY
 DFS_STREAM_BUFFER_SIZE_DEFAULT
 DFS_NAMENODE_SAFEMODE_EXTENSION_DEFAULT
 DFS_NAMENODE_REPLICATION_CONSIDERLOAD_DEFAULT
 DFS_SERVER_HTTPS_KEYSTORE_RESOURCE_DEFAULT
 DFS_NAMENODE_HOSTS_KEY
 DFS_NAMENODE_HOSTS_EXCLUDE_KEY
 DFS_HTTPS_ENABLE_DEFAULT
 DFS_DATANODE_HTTPS_DEFAULT_PORT
 DFS_DF_INTERVAL_DEFAULT
 DFS_WEB_UGI_KEY
 I think we should clean up DFSConfigKeys.java.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5206) Some unreferred constants in DFSConfigKeys should be cleanuped

2013-09-16 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13768567#comment-13768567
 ] 

Kousuke Saruta commented on HDFS-5206:
--

* DFS_STREAM_BUFFER_SIZE_KEY / DFS_STREAM_BUFFER_SIZE_DEFAULT
  This constants pair is never used.

* DFS_NAMENODE_HOSTS_KEY and DFS_NAMENODE_HOSTS_EXCLUDE_KEY
  This constants pair is never used. Eli Collins mentioned this in HDFS-3209.

* DFS_NAMENODE_SAFEMODE_EXTENSION_DEFAULT / 
DFS_NAMENODE_REPLICATION_CONSIDERLOAD_DEFAULT / 
DFS_HTTPS_ENABLE_DEFAULT / 
DFS_SERVER_HTTPS_KEYSTORE_RESOURCE_DEFAULT
  The default values are hard-coded although we should use these constants.

* DFS_DATANODE_HTTPS_DEFAULT_PORT
  This constant is used indirectly in DFSConfigKeys.DFS_DATANODE_ADDRESS_DEFAULT, 
so we should keep it in the code.

* DFS_DF_INTERVAL_DEFAULT
  This constant is never used. The property (dfs.df.interval) for which this 
constant defines the default is used in test code 
(TestDataNodeVolumeFailureReporting.java and 
TestDataNodeVolumeFailureToleration.java), and dfs.df.interval is marked as 
DEPRECATED in o.a.h.c.Configuration.java.

* DFS_WEB_UGI_KEY
  This constant is also defined in StaticUserWebFilter.java, and the property 
defined by this constant is marked as DEPRECATED.


 Some unreferred constants in DFSConfigKeys should be cleanuped
 --

 Key: HDFS-5206
 URL: https://issues.apache.org/jira/browse/HDFS-5206
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
Priority: Minor
 Fix For: 3.0.0


 There are some constants in DFSConfigKeys.java which are no longer referenced 
 by any code.
 The unreferenced constants are listed below.
 DFS_STREAM_BUFFER_SIZE_KEY
 DFS_STREAM_BUFFER_SIZE_DEFAULT
 DFS_NAMENODE_SAFEMODE_EXTENSION_DEFAULT
 DFS_NAMENODE_REPLICATION_CONSIDERLOAD_DEFAULT
 DFS_SERVER_HTTPS_KEYSTORE_RESOURCE_DEFAULT
 DFS_NAMENODE_HOSTS_KEY
 DFS_NAMENODE_HOSTS_EXCLUDE_KEY
 DFS_HTTPS_ENABLE_DEFAULT
 DFS_DATANODE_HTTPS_DEFAULT_PORT
 DFS_DF_INTERVAL_DEFAULT
 DFS_WEB_UGI_KEY
 I think we should clean up DFSConfigKeys.java.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5206) Some unreferred constants in DFSConfigKeys should be cleanuped

2013-09-16 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13768660#comment-13768660
 ] 

Kousuke Saruta commented on HDFS-5206:
--

I suggest the following.

* DFS_STREAM_BUFFER_SIZE_KEY / DFS_STREAM_BUFFER_SIZE_DEFAULT / 
DFS_NAMENODE_HOSTS_KEY / DFS_NAMENODE_HOSTS_EXCLUDE_KEY
  We should remove these constants from DFSConfigKeys.java.

* DFS_NAMENODE_REPLICATION_CONSIDERLOAD_DEFAULT / DFS_HTTPS_ENABLE_DEFAULT / 
DFS_SERVER_HTTPS_KEYSTORE_RESOURCE_DEFAULT
  We should replace the hard-coded default values with these constants, except 
in test code. In some test code the hard-coded default values differ from one 
another.

* DFS_NAMENODE_SAFEMODE_EXTENSION_DEFAULT
  We should replace the hard-coded default value with this constant, except in 
test code and BackupNode.java. In some test code the hard-coded default values 
differ from one another. BackupNode.java reads dfs.namenode.safemode.extension 
but uses its own default value (Integer.MAX_VALUE), so I think we should 
define a property dfs.backupnode.safemode.extension and the constants 
DFSConfigKeys.DFS_BACKUPNODE_SAFEMODE_EXTENSION_KEY and 
DFSConfigKeys.DFS_BACKUPNODE_SAFEMODE_EXTENSION_DEFAULT.

* DFS_DATANODE_HTTPS_DEFAULT_PORT
  We should keep this constant in DFSConfigKeys.java.

* DFS_DF_INTERVAL_DEFAULT
  We should remove this constant from DFSConfigKeys.java, replace 
DFSConfigKeys.DFS_DF_INTERVAL in TestDataNodeVolumeFailureReporting.java and 
TestDataNodeVolumeFailureToleration.java with DFSConfigKeys.FS_DF_INTERVAL_KEY, 
and deprecate dfs.df.interval in favor of fs.df.interval.

* DFS_WEB_UGI_KEY
  We should remove this constant from DFSConfigKeys.java, replace it with 
DFSConfigKeys.HADOOP_HTTP_STATIC_USER, and deprecate dfs.web.ugi in favor of 
hadoop.http.staticuser.user.
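
For the two deprecations, a sketch of the wiring with the usual Configuration 
API (a sketch only, using the key names above):

{code}
// Map the old keys to their replacements so existing configs keep working.
Configuration.addDeprecation("dfs.df.interval",
    new String[] { "fs.df.interval" });
Configuration.addDeprecation("dfs.web.ugi",
    new String[] { "hadoop.http.staticuser.user" });
{code}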

 Some unreferred constants in DFSConfigKeys should be cleanuped
 --

 Key: HDFS-5206
 URL: https://issues.apache.org/jira/browse/HDFS-5206
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
Priority: Minor
 Fix For: 3.0.0


 There are some constants in DFSConfigKeys.java which are no longer referenced 
 by any code.
 The unreferenced constants are listed below.
 DFS_STREAM_BUFFER_SIZE_KEY
 DFS_STREAM_BUFFER_SIZE_DEFAULT
 DFS_NAMENODE_SAFEMODE_EXTENSION_DEFAULT
 DFS_NAMENODE_REPLICATION_CONSIDERLOAD_DEFAULT
 DFS_SERVER_HTTPS_KEYSTORE_RESOURCE_DEFAULT
 DFS_NAMENODE_HOSTS_KEY
 DFS_NAMENODE_HOSTS_EXCLUDE_KEY
 DFS_HTTPS_ENABLE_DEFAULT
 DFS_DATANODE_HTTPS_DEFAULT_PORT
 DFS_DF_INTERVAL_DEFAULT
 DFS_WEB_UGI_KEY
 I think we should clean up DFSConfigKeys.java.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-5206) Some unreferred constants in DFSConfigKeys should be cleanuped

2013-09-13 Thread Kousuke Saruta (JIRA)
Kousuke Saruta created HDFS-5206:


 Summary: Some unreferred constants in DFSConfigKeys should be 
cleanuped
 Key: HDFS-5206
 URL: https://issues.apache.org/jira/browse/HDFS-5206
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Priority: Minor
 Fix For: 3.0.0


There are some constants in DFSConfigKeys.java which are no longer referenced 
by any code.
The unreferenced constants are listed below.

DFS_STREAM_BUFFER_SIZE_KEY
DFS_STREAM_BUFFER_SIZE_DEFAULT
DFS_NAMENODE_SAFEMODE_EXTENSION_DEFAULT
DFS_NAMENODE_REPLICATION_CONSIDERLOAD_DEFAULT
DFS_SERVER_HTTPS_KEYSTORE_RESOURCE_DEFAULT
DFS_NAMENODE_HOSTS_KEY
DFS_NAMENODE_HOSTS_EXCLUDE_KEY
DFS_HTTPS_ENABLE_DEFAULT
DFS_DATANODE_HTTPS_DEFAULT_PORT
DFS_DF_INTERVAL_DEFAULT
DFS_WEB_UGI_KEY

I think we should clean up DFSConfigKeys.java.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HDFS-5072) fuse_dfs: ERROR: could not connect open file fuse_impls_open.c:54

2013-08-06 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta reassigned HDFS-5072:


Assignee: Kousuke Saruta

 fuse_dfs: ERROR: could not connect open file fuse_impls_open.c:54
 -

 Key: HDFS-5072
 URL: https://issues.apache.org/jira/browse/HDFS-5072
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fuse-dfs
Affects Versions: 1.1.2
 Environment: CentOS 6.4 amd64
Reporter: Guoen Yong
Assignee: Kousuke Saruta

 Here are some command lines on CentOS 6.4:
 sudo ./fuse_dfs_wrapper.sh dfs://172.16.0.80:9000 /mnt/hdfs
 sudo -u hadoop bin/hadoop dfs -mkdir /test
 sudo -u hadoop bin/hadoop dfs -chown -R root:root /test
 I can create files and directories with the following command lines:
 sudo bin/hadoop dfs -copyFromLocal /tmp/vod/* /test
 sudo touch /mnt/hdfs/test/test.txt
 I then created the samba share \\172.16.0.80\hdfs for /mnt/hdfs.
 On a Windows system, going to the share folder \\172.16.0.80\hdfs\test as 
 the root user,
 I can create directories, copy files from the samba share, and rename files 
 on it,
 but when I copy a file into the share, a window pops up saying I/O error.
 I checked /var/log/messages and found:
 fuse_dfs: ERROR: could not connect open file fuse_impls_open.c:54
 I'm guessing it's a bad build, but wondering if there might be another cause.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5072) fuse_dfs: ERROR: could not connect open file fuse_impls_open.c:54

2013-08-06 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5072:
-

Assignee: (was: Kousuke Saruta)

 fuse_dfs: ERROR: could not connect open file fuse_impls_open.c:54
 -

 Key: HDFS-5072
 URL: https://issues.apache.org/jira/browse/HDFS-5072
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fuse-dfs
Affects Versions: 1.1.2
 Environment: CentOS 6.4 amd64
Reporter: Guoen Yong

 Here are some command lines on CentOS 6.4:
 sudo ./fuse_dfs_wrapper.sh dfs://172.16.0.80:9000 /mnt/hdfs
 sudo -u hadoop bin/hadoop dfs -mkdir /test
 sudo -u hadoop bin/hadoop dfs -chown -R root:root /test
 I can create files and directories with the following command lines:
 sudo bin/hadoop dfs -copyFromLocal /tmp/vod/* /test
 sudo touch /mnt/hdfs/test/test.txt
 I then created the samba share \\172.16.0.80\hdfs for /mnt/hdfs.
 On a Windows system, going to the share folder \\172.16.0.80\hdfs\test as 
 the root user,
 I can create directories, copy files from the samba share, and rename files 
 on it,
 but when I copy a file into the share, a window pops up saying I/O error.
 I checked /var/log/messages and found:
 fuse_dfs: ERROR: could not connect open file fuse_impls_open.c:54
 I'm guessing it's a bad build, but wondering if there might be another cause.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-5056) Backport HDFS-1490 to branch-1: TransferFsImage should timeout

2013-08-02 Thread Kousuke Saruta (JIRA)
Kousuke Saruta created HDFS-5056:


 Summary: Backport HDFS-1490 to branch-1: TransferFsImage should 
timeout
 Key: HDFS-5056
 URL: https://issues.apache.org/jira/browse/HDFS-5056
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 1.3.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
 Fix For: 1.3.0


For the same reasons as HDFS-1490, TransferFsImage of branch-1 should timeout.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5062) org.apache.hadoop.fs.Df should not use bash -c

2013-08-02 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13728137#comment-13728137
 ] 

Kousuke Saruta commented on HDFS-5062:
--

Hi Colin,

I think this jira is a duplicate of HADOOP-9818.

 org.apache.hadoop.fs.Df should not use bash -c
 --

 Key: HDFS-5062
 URL: https://issues.apache.org/jira/browse/HDFS-5062
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Priority: Minor

 {{org.apache.hadoop.fs.Df}} shouldn't use {{bash -c}}.  The only reason why 
 it is using it right now is to combine stderr and stdout, something that can 
 be done easily from Java.
 {{bash}} may not be present on all systems, so having it as a dependency is 
 undesirable.  There are also potential problems in cases where paths contain 
 shell metacharacters.  Finally, having this code here encourages people to 
 copy this use of bash, potentially introducing more shell injection flaws.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5056) Backport HDFS-1490 to branch-1: TransferFsImage should time out

2013-08-02 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5056:
-

Assignee: (was: Kousuke Saruta)

 Backport HDFS-1490 to branch-1: TransferFsImage should time out
 --

 Key: HDFS-5056
 URL: https://issues.apache.org/jira/browse/HDFS-5056
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 1.3.0
Reporter: Kousuke Saruta
 Fix For: 1.3.0


 For the same reasons as HDFS-1490, TransferFsImage of branch-1 should time out.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5033) Bad error message for fs -put/copyFromLocal if user doesn't have permissions to read the source

2013-07-29 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13723022#comment-13723022
 ] 

Kousuke Saruta commented on HDFS-5033:
--

Hi Karthik,

I think a message that amounts to Permission Denied should not be displayed on 
the client side, from a security point of view.
Instead, it should be logged on the server side as an audit log entry.

What do you think?

 Bad error message for fs -put/copyFromLocal if user doesn't have permissions 
 to read the source
 ---

 Key: HDFS-5033
 URL: https://issues.apache.org/jira/browse/HDFS-5033
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Karthik Kambatla
Priority: Minor
  Labels: noob

 fs -put/copyFromLocal shows a No such file or directory error when the user 
 doesn't have permissions to read the source file/directory. Saying 
 Permission Denied is more useful to the user.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5028) LeaseRenewer throws java.util.ConcurrentModificationException on timeout

2013-07-29 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13723095#comment-13723095
 ] 

Kousuke Saruta commented on HDFS-5028:
--

Hi zhaoyunjiong,
I think your modification may reduce the likelihood of this issue, but it 
doesn't address the root cause.
Instead, how about synchronizing access to dfsclients?
What do you think?
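
For illustration, a minimal sketch of renewing against a snapshot of the list 
instead of the live list (names are taken from the stack trace below; this is 
not the actual patch):

{code}
// Hypothetical fragment: copy the client list under the lock, then iterate
// the copy, so that c.abort() can remove entries from the live dfsclients
// list without invalidating the iterator.
final List<DFSClient> snapshot;
synchronized (this) {
  snapshot = new ArrayList<DFSClient>(dfsclients);
}
for (DFSClient c : snapshot) {
  c.renewLease();  // a timeout here may trigger c.abort() safely
}
{code}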

 LeaseRenewer throws java.util.ConcurrentModificationException on timeout
 -

 Key: HDFS-5028
 URL: https://issues.apache.org/jira/browse/HDFS-5028
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 1.1.0, 2.0.0-alpha
Reporter: zhaoyunjiong
 Fix For: 1.1.3

 Attachments: HDFS-5028-branch-1.1.patch, HDFS-5028.patch


 In LeaseRenewer, when renew() throws SocketTimeoutException, c.abort() will 
 remove one dfsclient from dfsclients. This throws a 
 ConcurrentModificationException, because dfsclients is modified after the 
 iterator is created by for(DFSClient c : dfsclients):
 Exception in thread org.apache.hadoop.hdfs.LeaseRenewer$1@75fa1077 
 java.util.ConcurrentModificationException
 at 
 java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372)
 at java.util.AbstractList$Itr.next(AbstractList.java:343)
 at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:406)
 at 
 org.apache.hadoop.hdfs.LeaseRenewer.access$600(LeaseRenewer.java:69)
 at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:273)
 at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-5044) dfs -ls should show the symlink indicator character and the link target

2013-07-29 Thread Kousuke Saruta (JIRA)
Kousuke Saruta created HDFS-5044:


 Summary: dfs -ls should show the symlink indicator character and the link target
 Key: HDFS-5044
 URL: https://issues.apache.org/jira/browse/HDFS-5044
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
 Fix For: 3.0.0


In the current implementation of HDFS, dfs -ls doesn't show the character that 
marks a symlink, and also doesn't show the symlink target, as in the following 
expected output:

lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
link_target
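
A rough sketch of how such a line could be assembled (fc and dateStr are 
placeholder names, the format is assumed from the example above, and the 
enclosing method is assumed to throw IOException):

{code}
// getFileLinkStatus describes the link itself rather than its target.
FileStatus stat = fc.getFileLinkStatus(path);
String type = stat.isSymlink() ? "l" : stat.isDirectory() ? "d" : "-";
String target = stat.isSymlink() ? " -> " + stat.getSymlink() : "";
System.out.println(type + stat.getPermission() + "  " + stat.getOwner() + ":"
    + stat.getGroup() + "  " + stat.getLen() + "  " + dateStr + " "
    + stat.getPath().getName() + target);
{code}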

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5044) dfs -ls should show the symlink indicator character and the link target

2013-07-29 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5044:
-

Description: 
In the current implementation of HDFS, dfs -ls doesn't show the character that 
marks a symlink, and also doesn't show the symlink target.
I expect output like the following.

lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
link_target

  was:
In the current implementation of HDFS, dfs -ls doesn't show the character that 
marks a symlink, and also doesn't show the symlink target, as in the following 
expected output:

lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
link_target


 dfs -ls should show the symlink indicator character and the link target
 -

 Key: HDFS-5044
 URL: https://issues.apache.org/jira/browse/HDFS-5044
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
 Fix For: 3.0.0


 In the current implementation of HDFS, dfs -ls doesn't show the character that 
 marks a symlink, and also doesn't show the symlink target.
 I expect output like the following.
 lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
 link_target

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5044) dfs -ls should show the symlink indicator character and the link target

2013-07-29 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5044:
-

Attachment: HDFS-5044.patch

I've attached an initial patch.

 dfs -ls should show the symlink indicator character and the link target
 -

 Key: HDFS-5044
 URL: https://issues.apache.org/jira/browse/HDFS-5044
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
 Fix For: 3.0.0

 Attachments: HDFS-5044.patch


 In the current implementation of HDFS, dfs -ls doesn't show the character that 
 marks a symlink, and also doesn't show the symlink target.
 I expect it to show output like the following.
 lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
 link_target

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5044) dfs -ls should show the symlink indicator character and the link target

2013-07-29 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5044:
-

Description: 
In the current implementation of HDFS, dfs -ls doesn't show the character that 
marks a symlink, and also doesn't show the symlink target.
I expect it to show output like the following.

lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
link_target

  was:
In the current implementation of HDFS, dfs -ls doesn't show the character that 
marks a symlink, and also doesn't show the symlink target.
I expect output like the following.

lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
link_target


 dfs -ls should show the symlink indicator character and the link target
 -

 Key: HDFS-5044
 URL: https://issues.apache.org/jira/browse/HDFS-5044
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
 Fix For: 3.0.0

 Attachments: HDFS-5044.patch


 In the current implementation of HDFS, dfs -ls doesn't show the character that 
 marks a symlink, and also doesn't show the symlink target.
 I expect it to show output like the following.
 lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
 link_target

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HDFS-5044) dfs -ls should show the symlink indicator character and the link target

2013-07-29 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta reassigned HDFS-5044:


Assignee: Kousuke Saruta

 dfs -ls should show the symlink indicator character and the link target
 -

 Key: HDFS-5044
 URL: https://issues.apache.org/jira/browse/HDFS-5044
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
 Fix For: 3.0.0

 Attachments: HDFS-5044.patch


 In the current implementation of HDFS, dfs -ls doesn't show the character that 
 marks a symlink, and also doesn't show the symlink target.
 I expect it to show output like the following.
 lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
 link_target

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5033) Bad error message for fs -put/copyFromLocal if user doesn't have permissions to read the source

2013-07-29 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13723249#comment-13723249
 ] 

Kousuke Saruta commented on HDFS-5033:
--

Permission Denied lets malicious users learn of the existence of files. I think 
if a user is not allowed to read a file, we shouldn't let them know it exists.

 Bad error message for fs -put/copyFromLocal if user doesn't have permissions 
 to read the source
 ---

 Key: HDFS-5033
 URL: https://issues.apache.org/jira/browse/HDFS-5033
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Karthik Kambatla
Priority: Minor
  Labels: noob

 fs -put/copyFromLocal shows a No such file or directory error when the user 
 doesn't have permissions to read the source file/directory. Saying 
 Permission Denied is more useful to the user.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5033) Bad error message for fs -put/copyFromLocal if user doesn't have permissions to read the source

2013-07-29 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13723272#comment-13723272
 ] 

Kousuke Saruta commented on HDFS-5033:
--

I agree with you; I think the behavior should be the same as for the local file 
system.

 Bad error message for fs -put/copyFromLocal if user doesn't have permissions 
 to read the source
 ---

 Key: HDFS-5033
 URL: https://issues.apache.org/jira/browse/HDFS-5033
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Karthik Kambatla
Priority: Minor
  Labels: noob

 fs -put/copyFromLocal shows a No such file or directory error when the user 
 doesn't have permissions to read the source file/directory. Saying 
 Permission Denied is more useful to the user.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5044) dfs -ls should show the symlink indicator character and the link target

2013-07-29 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13723288#comment-13723288
 ] 

Kousuke Saruta commented on HDFS-5044:
--

Andrew, thank you for letting me know.
I got it, and I will close this jira as a duplicate of HDFS-4019.

 dfs -ls should show the symlink indicator character and the link target
 -

 Key: HDFS-5044
 URL: https://issues.apache.org/jira/browse/HDFS-5044
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
 Fix For: 3.0.0

 Attachments: HDFS-5044.patch


 In the current implementation of HDFS, dfs -ls doesn't show the character that 
 marks a symlink, and also doesn't show the symlink target.
 I expect it to show output like the following.
 lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
 link_target

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-5044) dfs -ls should show the symlink indicator character and the link target

2013-07-29 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta resolved HDFS-5044.
--

Resolution: Duplicate

 dfs -ls should show the symlink indicator character and the link target
 -

 Key: HDFS-5044
 URL: https://issues.apache.org/jira/browse/HDFS-5044
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
 Fix For: 3.0.0

 Attachments: HDFS-5044.patch


 In the current implementation of HDFS, dfs -ls doesn't show the character that 
 marks a symlink, and also doesn't show the symlink target.
 I expect it to show output like the following.
 lrwxrwxrwx  1 myuser:mygroup  1024  2013-07-29 /user/myuser/symlink -> 
 link_target

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5007) Replace hard-coded property keys with DFSConfigKeys fields

2013-07-19 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13713361#comment-13713361
 ] 

Kousuke Saruta commented on HDFS-5007:
--

Jing, thank you for committing!

 Replace hard-coded property keys with DFSConfigKeys fields
 --

 Key: HDFS-5007
 URL: https://issues.apache.org/jira/browse/HDFS-5007
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
Priority: Minor
 Fix For: 2.1.0-beta

 Attachments: HDFS-5007.patch, HDFS-5007.patch, HDFS-5007.patch


 In TestHftpFileSystem.java and NameNodeHttpServer.java, the two property 
 keys dfs.http.port and dfs.https.port are hard-coded.
 Now that the constants DFS_NAMENODE_HTTP_PORT_KEY and 
 DFS_NAMENODE_HTTPS_PORT_KEY in DFSConfigKeys are available, I think we should 
 replace them for maintainability.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4278) Log an ERROR when DFS_BLOCK_ACCESS_TOKEN_ENABLE config is disabled but security is turned on.

2013-07-19 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13713395#comment-13713395
 ] 

Kousuke Saruta commented on HDFS-4278:
--

Thank you for your advice, Harsh!
I keep in mind.

 Log an ERROR when DFS_BLOCK_ACCESS_TOKEN_ENABLE config  is disabled but 
 security is turned on.
 --

 Key: HDFS-4278
 URL: https://issues.apache.org/jira/browse/HDFS-4278
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Kousuke Saruta
  Labels: newbie
 Fix For: 3.0.0, 2.3.0

 Attachments: HDFS-4278.patch, HDFS-4278.patch, HDFS-4278.patch, 
 HDFS-4278.patch


 When enabling security, one has to manually enable the config 
 DFS_BLOCK_ACCESS_TOKEN_ENABLE (dfs.block.access.token.enable). Since these 
 two are coupled, we could make it turn itself on automatically if we find 
 security to be enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4983) Numeric usernames do not work with WebHDFS FS

2013-07-19 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13713815#comment-13713815
 ] 

Kousuke Saruta commented on HDFS-4983:
--

Hi Harsh,

I understood what you said.
It seems that there is httpfs.user.provider.user.pattern to allow a more 
permissive regex for HttpFS, but there is no corresponding property for WebHDFS.

So, can we add a property like webhdfs.user.provider.user.pattern?

 Numeric usernames do not work with WebHDFS FS
 -

 Key: HDFS-4983
 URL: https://issues.apache.org/jira/browse/HDFS-4983
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Affects Versions: 2.0.0-alpha
Reporter: Harsh J

 Per the file 
 hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java,
  the DOMAIN pattern is set to: {{^[A-Za-z_][A-Za-z0-9._-]*[$]?$}}.
 Given this, using a username such as 123 seems to fail for some reason 
 (tried on insecure setup):
 {code}
 [123@host-1 ~]$ whoami
 123
 [123@host-1 ~]$ hadoop fs -fs webhdfs://host-2.domain.com -ls /
 -ls: Invalid value: 123 does not belong to the domain 
 ^[A-Za-z_][A-Za-z0-9._-]*[$]?$
 Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4983) Numeric usernames do not work with WebHDFS FS

2013-07-18 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13712115#comment-13712115
 ] 

Kousuke Saruta commented on HDFS-4983:
--

Hi Harsh,
I think the reason is that the USER_PATTERN_DEFAULT field in UserProvider is set 
to ^[A-Za-z_][A-Za-z0-9._-]*[$]?$.
But some authentication systems, such as shadow, allow less restrictive user names.
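
For illustration, a tiny standalone check of that pattern (hypothetical test 
code, not from the project):

{code}
import java.util.regex.Pattern;

public class UserPatternCheck {
  public static void main(String[] args) {
    Pattern p = Pattern.compile("^[A-Za-z_][A-Za-z0-9._-]*[$]?$");
    System.out.println(p.matcher("alice").matches()); // true: starts with a letter
    System.out.println(p.matcher("123").matches());   // false: first char must match [A-Za-z_]
  }
}
{code}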

 Numeric usernames do not work with WebHDFS FS
 -

 Key: HDFS-4983
 URL: https://issues.apache.org/jira/browse/HDFS-4983
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Affects Versions: 2.0.0-alpha
Reporter: Harsh J

 Per the file 
 hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java,
  the DOMAIN pattern is set to: {{^[A-Za-z_][A-Za-z0-9._-]*[$]?$}}.
 Given this, using a username such as 123 seems to fail for some reason 
 (tried on insecure setup):
 {code}
 [123@host-1 ~]$ whoami
 123
 [123@host-1 ~]$ hadoop fs -fs webhdfs://host-2.domain.com -ls /
 -ls: Invalid value: 123 does not belong to the domain 
 ^[A-Za-z_][A-Za-z0-9._-]*[$]?$
 Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [path ...]
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5007) the property keys dfs.http.port and dfs.https.port are hard-coded

2013-07-18 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13712136#comment-13712136
 ] 

Kousuke Saruta commented on HDFS-5007:
--

Thank you for your comment.
I will close HDFS-5002 and rewrite the patch here to include the removal.

 the property keys dfs.http.port and dfs.https.port are hard-coded
 -

 Key: HDFS-5007
 URL: https://issues.apache.org/jira/browse/HDFS-5007
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
 Fix For: 3.0.0

 Attachments: HDFS-5007.patch


 In TestHftpFileSystem.java and NameNodeHttpServer.java, the two property 
 keys dfs.http.port and dfs.https.port are hard-coded.
 Now that the constants DFS_NAMENODE_HTTP_PORT_KEY and 
 DFS_NAMENODE_HTTPS_PORT_KEY in DFSConfigKeys are available, I think we should 
 replace them for maintainability.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5002) DFS_HTTPS_PORT_KEY is no longer referenced

2013-07-18 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5002:
-

Status: Open  (was: Patch Available)

 DFS_HTTPS_PORT_KEY is no longer referenced
 ---

 Key: HDFS-5002
 URL: https://issues.apache.org/jira/browse/HDFS-5002
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-5002.patch


 In DFSConfigKeys of trunk, there is a DFS_HTTPS_PORT_KEY field although it is 
 no longer referenced by any code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-5002) DFS_HTTPS_PORT_KEY is no longer referenced

2013-07-18 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta resolved HDFS-5002.
--

Resolution: Implemented

This issue will be handled as part of HDFS-5007.

 DFS_HTTPS_PORT_KEY is no longer referenced
 ---

 Key: HDFS-5002
 URL: https://issues.apache.org/jira/browse/HDFS-5002
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-5002.patch


 In DFSConfigKeys of trunk, there is a DFS_HTTPS_PORT_KEY field although it is 
 no longer referenced by any code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5007) the property keys dfs.http.port and dfs.https.port are hard-coded

2013-07-18 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5007:
-

Attachment: HDFS-5007.patch

I explored unused/duplicated fields in DFSConfigKeys and found that only 
DFS_HTTPS_PORT_KEY is unused, and that there are no duplicated fields except 
for dfs.https.port.

 the property keys dfs.http.port and dfs.https.port are hard-coded
 -

 Key: HDFS-5007
 URL: https://issues.apache.org/jira/browse/HDFS-5007
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
 Fix For: 3.0.0

 Attachments: HDFS-5007.patch, HDFS-5007.patch


 In TestHftpFileSystem.java and NameNodeHttpServer.java, the two property 
 keys dfs.http.port and dfs.https.port are hard-coded.
 Now that the constants DFS_NAMENODE_HTTP_PORT_KEY and 
 DFS_NAMENODE_HTTPS_PORT_KEY in DFSConfigKeys are available, I think we should 
 replace them for maintainability.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5007) the property keys dfs.http.port and dfs.https.port are hard-coded

2013-07-18 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5007:
-

Status: Open  (was: Patch Available)

 the property keys dfs.http.port and dfs.https.port are hard-coded
 -

 Key: HDFS-5007
 URL: https://issues.apache.org/jira/browse/HDFS-5007
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
 Fix For: 3.0.0

 Attachments: HDFS-5007.patch, HDFS-5007.patch


 In TestHftpFileSystem.java and NameNodeHttpServer.java, the two property 
 keys dfs.http.port and dfs.https.port are hard-coded.
 Now that the constants DFS_NAMENODE_HTTP_PORT_KEY and 
 DFS_NAMENODE_HTTPS_PORT_KEY in DFSConfigKeys are available, I think we should 
 replace them for maintainability.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5007) the property keys dfs.http.port and dfs.https.port are hard-coded

2013-07-18 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5007:
-

Attachment: HDFS-5007.patch

Thanks for adding me as a contributor and checking my patch, Jing!
I've added a new patch.

 the property keys dfs.http.port and dfs.https.port are hard-coded
 -

 Key: HDFS-5007
 URL: https://issues.apache.org/jira/browse/HDFS-5007
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
 Fix For: 3.0.0

 Attachments: HDFS-5007.patch, HDFS-5007.patch, HDFS-5007.patch


 In TestHftpFileSystem.java and NameNodeHttpServer.java, the two property 
 keys dfs.http.port and dfs.https.port are hard-coded.
 Now that the constants DFS_NAMENODE_HTTP_PORT_KEY and 
 DFS_NAMENODE_HTTPS_PORT_KEY in DFSConfigKeys are available, I think we should 
 replace them for maintainability.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5007) the property keys dfs.http.port and dfs.https.port are hard-coded

2013-07-18 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5007:
-

Status: Patch Available  (was: Open)

 the property keys dfs.http.port and dfs.https.port are hard-coded
 -

 Key: HDFS-5007
 URL: https://issues.apache.org/jira/browse/HDFS-5007
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
 Fix For: 3.0.0

 Attachments: HDFS-5007.patch, HDFS-5007.patch, HDFS-5007.patch


 In TestHftpFileSystem.java and NameNodeHttpServer.java, the two property 
 keys dfs.http.port and dfs.https.port are hard-coded.
 Now that the constants DFS_NAMENODE_HTTP_PORT_KEY and 
 DFS_NAMENODE_HTTPS_PORT_KEY in DFSConfigKeys are available, I think we should 
 replace them for maintainability.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4278) The DFS_BLOCK_ACCESS_TOKEN_ENABLE config should be automatically turned on when security is enabled.

2013-07-18 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13713281#comment-13713281
 ] 

Kousuke Saruta commented on HDFS-4278:
--

Harsh, do you have any concrete ideas? So far, I think logging an ERROR is the 
more reasonable way.
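
A rough sketch of the check I have in mind (an illustrative fragment, not the 
committed patch; conf and LOG are assumed from the surrounding NameNode code):

{code}
// If security is on but block access tokens are off, surface it loudly.
if (UserGroupInformation.isSecurityEnabled()
    && !conf.getBoolean(DFSConfigKeys.DFS_BLOCK_ACCESS_TOKEN_ENABLE_KEY, false)) {
  LOG.error("Security is enabled but "
      + DFSConfigKeys.DFS_BLOCK_ACCESS_TOKEN_ENABLE_KEY
      + " is false; block tokens will not be issued.");
}
{code}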

 The DFS_BLOCK_ACCESS_TOKEN_ENABLE config should be automatically turned on 
 when security is enabled.
 

 Key: HDFS-4278
 URL: https://issues.apache.org/jira/browse/HDFS-4278
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
  Labels: newbie
 Attachments: HDFS-4278.patch


 When enabling security, one has to manually enable the config 
 DFS_BLOCK_ACCESS_TOKEN_ENABLE (dfs.block.access.token.enable). Since these 
 two are coupled, we could make it turn itself on automatically if we find 
 security to be enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-5011) '._COPYING_' suffix temp file could prevent a MapReduce job

2013-07-18 Thread Kousuke Saruta (JIRA)
Kousuke Saruta created HDFS-5011:


 Summary:  '._COPYING_' suffix temp file could prevent a MapReduce job
 Key: HDFS-5011
 URL: https://issues.apache.org/jira/browse/HDFS-5011
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
 Fix For: 3.0.0


FsShell copy/put creates staging files with a '._COPYING_' suffix.
When we run a MapReduce job while the temp file exists, the job could fail 
because the temp file is visible to the MapReduce job.
So, I suggest renaming the temp file with a '_' prefix, such as '_*._COPYING_'.
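
For context, a paraphrase of the hidden-file convention this proposal relies 
on: FileInputFormat skips inputs whose names start with '_' or '.'. A sketch, 
not the actual Hadoop source:

{code}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

// Names beginning with '_' or '.' are treated as hidden and excluded from
// job input, which is why a '_' prefix would hide the staging file.
public class HiddenFileFilterSketch implements PathFilter {
  @Override
  public boolean accept(Path p) {
    String name = p.getName();
    return !name.startsWith("_") && !name.startsWith(".");
  }
}
{code}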

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5011) '._COPYING_' suffix temp file could prevent a MapReduce job from running

2013-07-18 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5011:
-

Summary:  '._COPYING_' suffix temp file could prevent a MapReduce job from 
running  (was:  '._COPYING_' suffix temp file could prevent a MapReduce job)

  '._COPYING_' suffix temp file could prevent a MapReduce job from running
 --

 Key: HDFS-5011
 URL: https://issues.apache.org/jira/browse/HDFS-5011
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
 Fix For: 3.0.0


 FsShell copy/put creates staging files with a '._COPYING_' suffix.
 When we run a MapReduce job while the temp file exists, the job could fail 
 because the temp file is visible to the MapReduce job.
 So, I suggest renaming the temp file with a '_' prefix, such as 
 '_*._COPYING_'.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5011) '._COPYING_' suffix temp file could prevent a MapReduce job from running

2013-07-18 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13713299#comment-13713299
 ] 

Kousuke Saruta commented on HDFS-5011:
--

Sorry, I wrote a wrong comment earlier.
The suffix is added in HADOOP-7771.

  '._COPYING_' suffix temp file could prevent a MapReduce job from running
 --

 Key: HDFS-5011
 URL: https://issues.apache.org/jira/browse/HDFS-5011
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Assignee: Kousuke Saruta
 Fix For: 3.0.0


 FsShell copy/put creates staging files with a '._COPYING_' suffix.
 When we run a MapReduce job while the temp file exists, the job could fail 
 because the temp file is visible to the MapReduce job.
 So, I suggest renaming the temp file with a '_' prefix, such as 
 '_*._COPYING_'.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HDFS-4278) The DFS_BLOCK_ACCESS_TOKEN_ENABLE config should be automatically turned on when security is enabled.

2013-07-18 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta reassigned HDFS-4278:


Assignee: Kousuke Saruta

 The DFS_BLOCK_ACCESS_TOKEN_ENABLE config should be automatically turned on 
 when security is enabled.
 

 Key: HDFS-4278
 URL: https://issues.apache.org/jira/browse/HDFS-4278
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Kousuke Saruta
  Labels: newbie
 Attachments: HDFS-4278.patch


 When enabling security, one has to manually enable the config 
 DFS_BLOCK_ACCESS_TOKEN_ENABLE (dfs.block.access.token.enable). Since these 
 two are coupled, we could make it turn itself on automatically if we find 
 security to be enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4278) The DFS_BLOCK_ACCESS_TOKEN_ENABLE config should be automatically turned on when security is enabled.

2013-07-18 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-4278:
-

Attachment: HDFS-4278.patch

I've attached a new patch that logs the error.

 The DFS_BLOCK_ACCESS_TOKEN_ENABLE config should be automatically turned on 
 when security is enabled.
 

 Key: HDFS-4278
 URL: https://issues.apache.org/jira/browse/HDFS-4278
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Kousuke Saruta
  Labels: newbie
 Attachments: HDFS-4278.patch, HDFS-4278.patch


 When enabling security, one has to manually enable the config 
 DFS_BLOCK_ACCESS_TOKEN_ENABLE (dfs.block.access.token.enable). Since these 
 two are coupled, we could make it turn itself on automatically if we find 
 security to be enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4278) The DFS_BLOCK_ACCESS_TOKEN_ENABLE config should be automatically turned on when security is enabled.

2013-07-18 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-4278:
-

Target Version/s: 3.0.0
  Status: Patch Available  (was: Open)

 The DFS_BLOCK_ACCESS_TOKEN_ENABLE config should be automatically turned on 
 when security is enabled.
 

 Key: HDFS-4278
 URL: https://issues.apache.org/jira/browse/HDFS-4278
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Kousuke Saruta
  Labels: newbie
 Attachments: HDFS-4278.patch, HDFS-4278.patch


 When enabling security, one has to manually enable the config 
 DFS_BLOCK_ACCESS_TOKEN_ENABLE (dfs.block.access.token.enable). Since these 
 two are coupled, we could make it turn itself on automatically if we find 
 security to be enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4278) The DFS_BLOCK_ACCESS_TOKEN_ENABLE config should be automatically turned on when security is enabled.

2013-07-18 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-4278:
-

Status: Open  (was: Patch Available)

 The DFS_BLOCK_ACCESS_TOKEN_ENABLE config should be automatically turned on 
 when security is enabled.
 

 Key: HDFS-4278
 URL: https://issues.apache.org/jira/browse/HDFS-4278
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Kousuke Saruta
  Labels: newbie
 Attachments: HDFS-4278.patch, HDFS-4278.patch


 When enabling security, one has to manually enable the config 
 DFS_BLOCK_ACCESS_TOKEN_ENABLE (dfs.block.access.token.enable). Since these 
 two are coupled, we could make it turn itself on automatically if we find 
 security to be enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4278) The DFS_BLOCK_ACCESS_TOKEN_ENABLE config should be automatically turned on when security is enabled.

2013-07-18 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-4278:
-

Attachment: HDFS-4278.patch

Thank you for your comment, Harsh!
I've improved the error message.

 The DFS_BLOCK_ACCESS_TOKEN_ENABLE config should be automatically turned on 
 when security is enabled.
 

 Key: HDFS-4278
 URL: https://issues.apache.org/jira/browse/HDFS-4278
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Kousuke Saruta
  Labels: newbie
 Attachments: HDFS-4278.patch, HDFS-4278.patch, HDFS-4278.patch


 When enabling security, one has to manually enable the config 
 DFS_BLOCK_ACCESS_TOKEN_ENABLE (dfs.block.access.token.enable). Since these 
 two are coupled, we could make it turn itself on automatically if we find 
 security to be enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4278) The DFS_BLOCK_ACCESS_TOKEN_ENABLE config should be automatically turned on when security is enabled.

2013-07-18 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-4278:
-

Status: Patch Available  (was: Open)

 The DFS_BLOCK_ACCESS_TOKEN_ENABLE config should be automatically turned on 
 when security is enabled.
 

 Key: HDFS-4278
 URL: https://issues.apache.org/jira/browse/HDFS-4278
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Kousuke Saruta
  Labels: newbie
 Attachments: HDFS-4278.patch, HDFS-4278.patch, HDFS-4278.patch


 When enabling security, one has to manually enable the config 
 DFS_BLOCK_ACCESS_TOKEN_ENABLE (dfs.block.access.token.enable). Since these 
 two are coupled, we could make it turn itself on automatically if we find 
 security to be enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5002) DFS_HTTPS_PORT_KEY is no longer referenced

2013-07-17 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13710903#comment-13710903
 ] 

Kousuke Saruta commented on HDFS-5002:
--

No new tests added because this only removed a field which is no longer refered.

 DFS_HTTPS_PORT_KEY is no longer referenced
 ---

 Key: HDFS-5002
 URL: https://issues.apache.org/jira/browse/HDFS-5002
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-5002.patch


 In DFSConfigKeys of trunk, there is DFS_HTTPS_PORT_KEY field although it is 
 no longer refered by any code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5002) DFS_HTTPS_PORT_KEY is no longer referenced

2013-07-17 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13711893#comment-13711893
 ] 

Kousuke Saruta commented on HDFS-5002:
--

Hi Jing,

Thank you for your comment!
The newer field DFS_NAMENODE_HTTPS_PORT_KEY in DFSConfigKeys is also set to 
dfs.https.port.
So, I think we should do two things:

1. Remove the older field DFS_HTTPS_PORT_KEY.
2. Replace the hard-coded dfs.https.port in those two Java files with 
DFS_NAMENODE_HTTPS_PORT_KEY.

Or should I just do the replacement without the removal?

 DFS_HTTPS_PORT_KEY is no longer referenced
 ---

 Key: HDFS-5002
 URL: https://issues.apache.org/jira/browse/HDFS-5002
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-5002.patch


 In DFSConfigKeys of trunk, there is DFS_HTTPS_PORT_KEY field although it is 
 no longer refered by any code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-5007) the property keys dfs.http.port and dfs.https.port are hard-coded

2013-07-17 Thread Kousuke Saruta (JIRA)
Kousuke Saruta created HDFS-5007:


 Summary: the property keys dfs.http.port and dfs.https.port are hard-coded
 Key: HDFS-5007
 URL: https://issues.apache.org/jira/browse/HDFS-5007
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
 Fix For: 3.0.0


In TestHftpFileSystem.java and NameNodeHttpServer.java, the two property 
keys dfs.http.port and dfs.https.port are hard-coded.
Now that the constants DFS_NAMENODE_HTTP_PORT_KEY and 
DFS_NAMENODE_HTTPS_PORT_KEY in DFSConfigKeys are available, I think we should 
replace them for maintainability.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5002) DFS_HTTPS_PORT_KEY is no longer referenced

2013-07-17 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13712032#comment-13712032
 ] 

Kousuke Saruta commented on HDFS-5002:
--

Sure.
I have also created HDFS-5007 to replace the hard-coded property keys, and I 
will keep the patch for the removal.

 DFS_HTTPS_PORT_KEY is no longer referenced
 ---

 Key: HDFS-5002
 URL: https://issues.apache.org/jira/browse/HDFS-5002
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-5002.patch


 In DFSConfigKeys of trunk, there is DFS_HTTPS_PORT_KEY field although it is 
 no longer refered by any code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5007) the property keys dfs.http.port and dfs.https.port are hard-coded

2013-07-17 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5007:
-

Status: Patch Available  (was: Open)

 the property keys dfs.http.port and dfs.https.port are hard-coded
 -

 Key: HDFS-5007
 URL: https://issues.apache.org/jira/browse/HDFS-5007
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
 Fix For: 3.0.0

 Attachments: HDFS-5007.patch


 In TestHftpFileSystem.java and NameNodeHttpServer.java, the two property 
 keys dfs.http.port and dfs.https.port are hard-coded.
 Now that the constants DFS_NAMENODE_HTTP_PORT_KEY and 
 DFS_NAMENODE_HTTPS_PORT_KEY in DFSConfigKeys are available, I think we should 
 replace them for maintainability.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5007) the property keys dfs.http.port and dfs.https.port are hard-coded

2013-07-17 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5007:
-

Attachment: HDFS-5007.patch

I found that not only the dfs.http.port and dfs.https.port property keys but 
also the following property keys are hard-coded, although DFSConfigKeys has 
fields for them:

dfs.https.server.keystore.resource
dfs.https.enable
dfs.datanode.https.address
datanode.https.port

So, the patch I attached includes modifications for those as well.
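
An illustrative before/after for one such key (DFS_NAMENODE_HTTP_PORT_KEY is 
named in this jira's description; the *_DEFAULT constant here is an assumption 
for the sketch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

public class KeyUsageSketch {
  static int httpPort(Configuration conf) {
    // before: conf.getInt("dfs.http.port", 50070) -- the key string hard-coded
    return conf.getInt(DFSConfigKeys.DFS_NAMENODE_HTTP_PORT_KEY,
                       DFSConfigKeys.DFS_NAMENODE_HTTP_PORT_DEFAULT);
  }
}
{code}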


 the property keys dfs.http.port and dfs.https.port are hard-coded
 -

 Key: HDFS-5007
 URL: https://issues.apache.org/jira/browse/HDFS-5007
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
 Fix For: 3.0.0

 Attachments: HDFS-5007.patch


 In TestHftpFileSystem.java and NameNodeHttpServer.java, the two property 
 keys dfs.http.port and dfs.https.port are hard-coded.
 Now that the constants DFS_NAMENODE_HTTP_PORT_KEY and 
 DFS_NAMENODE_HTTPS_PORT_KEY in DFSConfigKeys are available, I think we should 
 replace them for maintainability.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3994) misleading comment in CommonConfigurationKeysPublic

2013-07-16 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13710706#comment-13710706
 ] 

Kousuke Saruta commented on HDFS-3994:
--

Colin, do you mean it's wrong to use CommonConfigurationKeys, and that we 
should use a class extending CommonConfigurationKeysPublic instead?

 misleading comment in CommonConfigurationKeysPublic
 ---

 Key: HDFS-3994
 URL: https://issues.apache.org/jira/browse/HDFS-3994
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Priority: Trivial

 {{CommonConfigurationKeysPublic}} contains a potentially misleading comment:
 {code}
 /** 
  * This class contains constants for configuration keys used
  * in the common code.
  *
  * It includes all publicly documented configuration keys. In general
  * this class should not be used directly (use CommonConfigurationKeys
  * instead)
  */
 {code}
 This comment suggests that the user use {{CommonConfigurationKeys}}, despite 
 the fact that that class is {{InterfaceAudience.private}} whereas 
 {{CommonConfigurationKeysPublic}} is {{InterfaceAudience.public}}.  Perhaps 
 this should be rephrased.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-5002) DFS_HTTPS_PORT_KEY is no longer referenced

2013-07-16 Thread Kousuke Saruta (JIRA)
Kousuke Saruta created HDFS-5002:


 Summary: DFS_HTTPS_PORT_KEY is no longer referenced
 Key: HDFS-5002
 URL: https://issues.apache.org/jira/browse/HDFS-5002
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Priority: Minor
 Fix For: 3.0.0


In DFSConfigKeys of trunk, there is a DFS_HTTPS_PORT_KEY field although it is no 
longer referenced by any code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5002) DFS_HTTPS_PORT_KEY is no longer referenced

2013-07-16 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5002:
-

Description: In DFSConfigKeys of trunk, there is a DFS_HTTPS_PORT_KEY field 
although it is no longer referenced by any code.  (was: In DFSConfigKeys of 
trunk, there is DFS_HTTPS_PORT_KEY field although it is no longer referenced by 
any code.)

 DFS_HTTPS_PORT_KEY is no longer referenced
 ---

 Key: HDFS-5002
 URL: https://issues.apache.org/jira/browse/HDFS-5002
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Priority: Minor
 Fix For: 3.0.0


 In DFSConfigKeys of trunk, there is a DFS_HTTPS_PORT_KEY field although it is 
 no longer referenced by any code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5002) DFS_HTTPS_PORT_KEY is no longer referenced

2013-07-16 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5002:
-

Attachment: HDFS-5002.patch

I have attached a patch for trunk.

 DFS_HTTPS_PORT_KEY is no longer referenced
 ---

 Key: HDFS-5002
 URL: https://issues.apache.org/jira/browse/HDFS-5002
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-5002.patch


 In DFSConfigKeys of trunk, there is a DFS_HTTPS_PORT_KEY field although it is 
 no longer referenced by any code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-5002) DFS_HTTPS_PORT_KEY is no longer referenced

2013-07-16 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-5002:
-

Status: Patch Available  (was: Open)

 DFS_HTTPS_PORT_KEY is no longer referenced
 ---

 Key: HDFS-5002
 URL: https://issues.apache.org/jira/browse/HDFS-5002
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-5002.patch


 In DFSConfigKeys of trunk, there is a DFS_HTTPS_PORT_KEY field although it is 
 no longer referenced by any code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4278) The DFS_BLOCK_ACCESS_TOKEN_ENABLE config should be automatically turned on when security is enabled.

2013-07-07 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HDFS-4278:
-

Attachment: HDFS-4278.patch

 The DFS_BLOCK_ACCESS_TOKEN_ENABLE config should be automatically turned on 
 when security is enabled.
 

 Key: HDFS-4278
 URL: https://issues.apache.org/jira/browse/HDFS-4278
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
  Labels: newbie
 Attachments: HDFS-4278.patch


 When enabling security, one has to manually enable the config 
 DFS_BLOCK_ACCESS_TOKEN_ENABLE (dfs.block.access.token.enable). Since these 
 two are coupled, we could make it turn itself on automatically if we find 
 security to be enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4278) The DFS_BLOCK_ACCESS_TOKEN_ENABLE config should be automatically turned on when security is enabled.

2013-07-07 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701787#comment-13701787
 ] 

Kousuke Saruta commented on HDFS-4278:
--

OK. I will try to modify it.

 The DFS_BLOCK_ACCESS_TOKEN_ENABLE config should be automatically turned on 
 when security is enabled.
 

 Key: HDFS-4278
 URL: https://issues.apache.org/jira/browse/HDFS-4278
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
  Labels: newbie
 Attachments: HDFS-4278.patch


 When enabling security, one has to manually enable the config 
 DFS_BLOCK_ACCESS_TOKEN_ENABLE (dfs.block.access.token.enable). Since these 
 two are coupled, we could make it turn itself on automatically if we find 
 security to be enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4888) Refactor and fix FSNamesystem.getTurnOffTip to sanity

2013-06-14 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13683796#comment-13683796
 ] 

Kousuke Saruta commented on HDFS-4888:
--

Ravi, I agree with you. I think it is important to print the message properly, 
for the sake of inspection and troubleshooting.

 Refactor and fix FSNamesystem.getTurnOffTip to sanity
 -

 Key: HDFS-4888
 URL: https://issues.apache.org/jira/browse/HDFS-4888
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.4-alpha, 0.23.9
Reporter: Ravi Prakash
Assignee: Ravi Prakash
 Attachments: HDFS-4888.patch, HDFS-4888.patch


 e.g., when resources are low, the command to leave safe mode is not printed.
 This method is unnecessarily complex.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4888) Refactor and fix FSNamesystem.getTurnOffTip to sanity

2013-06-10 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13679775#comment-13679775
 ] 

Kousuke Saruta commented on HDFS-4888:
--

[~raviprak], could you give us more details?

 Refactor and fix FSNamesystem.getTurnOffTip to sanity
 -

 Key: HDFS-4888
 URL: https://issues.apache.org/jira/browse/HDFS-4888
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.4-alpha, 0.23.8
Reporter: Ravi Prakash
Assignee: Ravi Prakash

 e.g., when resources are low, the command to leave safe mode is not printed.
 This method is unnecessarily complex.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4897) Copying a directory does not include the directory's quota settings

2013-06-10 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13679834#comment-13679834
 ] 

Kousuke Saruta commented on HDFS-4897:
--

In the 3.0.0 implementation, the cp -p command preserves only the mtime, atime, 
owner, group, and permission of files.
I think that quota, as well as other attributes, should be preserved by the 
cp -p command. So, I think adding a preserveAttributes(Path src, Path dst) 
method is a good idea; a sketch follows the current code below.

{code}
 protected void copyFileToTarget(PathData src, PathData target) throws 
IOException {
src.fs.setVerifyChecksum(verifyChecksum);
InputStream in = null;
try {
  in = src.fs.open(src.path);
  copyStreamToTarget(in, target);
  if(preserve) {
target.fs.setTimes(
  target.path,
  src.stat.getModificationTime(),
  src.stat.getAccessTime());
target.fs.setOwner(
  target.path,
  src.stat.getOwner(),
  src.stat.getGroup());
target.fs.setPermission(
  target.path,
  src.stat.getPermission());
  }
} finally {
  IOUtils.closeStream(in);
}
  }
{code}
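
A rough sketch of the suggested helper (using the surrounding PathData type 
instead of Path; the quota-copying hook is an assumption about where this 
issue's fix would land):

{code}
private void preserveAttributes(PathData src, PathData target)
    throws IOException {
  // The attributes -p preserves today, factored out of copyFileToTarget...
  target.fs.setTimes(target.path,
      src.stat.getModificationTime(), src.stat.getAccessTime());
  target.fs.setOwner(target.path, src.stat.getOwner(), src.stat.getGroup());
  target.fs.setPermission(target.path, src.stat.getPermission());
  // ...plus a hook where directory quota settings could be copied as well,
  // via the administrative quota APIs (which is the point of this issue).
}
{code}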

 Copying a directory does not include the directory's quota settings
 ---

 Key: HDFS-4897
 URL: https://issues.apache.org/jira/browse/HDFS-4897
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Stephen Chu
  Labels: quota

 [~atm] and I found that when a directory is copied, its quota settings 
 aren't included.
 {code}
 [04:21:33] atm@simon:~/src/apache/hadoop.git$ hadoop fs -ls /user
 Found 2 items
 drwxr-xr-x   - atm  atm 0 2013-06-07 16:17 /user/atm
 drwx--   - hdfs supergroup  0 2013-06-07 16:21 /user/hdfs
 [04:21:44] atm@simon:~/src/apache/hadoop.git$ hadoop fs -count -q /user/atm
  100  91  none  inf  7  2  3338  /user/atm
 [04:21:51] atm@simon:~/src/apache/hadoop.git$ sudo -u hdfs -E `which hadoop` 
 fs -cp /user/atm /user/atm-copy
 [04:22:00] atm@simon:~/src/apache/hadoop.git$ hadoop fs -count -q 
 /user/atm-copy
  none  inf  none  inf  6  1  3338  /user/atm-copy
 {code}
 This also means that a user will not retain quota settings when taking a 
 snapshot and restoring a subtree from it, because we have to use copy (moving 
 snapshots is not allowed).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira