[jira] [Updated] (HDFS-11110) Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC

2016-11-04 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HDFS-11110:
--
Description: Use the 
NativeIO.POSIX.getCacheManipulator().getOperatingSystemPageSize() function 
rather than a hard-coded block size.

> Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC
> -
>
> Key: HDFS-11110
> URL: https://issues.apache.org/jira/browse/HDFS-11110
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HDFS-11110.001.patch
>
>
> Use the NativeIO.POSIX.getCacheManipulator().getOperatingSystemPageSize() 
> function rather than a hard-coded block size.
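The motivation for the fix can be sketched outside Hadoop. The minimal Python illustration below uses `mmap.PAGESIZE` as a stand-in for the NativeIO call; treating the two as equivalent (both return the OS page size) is an assumption:

```python
import mmap

# HDFS-11110: 4096 is the usual x86 page size, but ppc64 commonly uses
# 64 KiB pages, so a hardcoded constant under-sizes page-aligned buffers.
HARDCODED_BLOCK_SIZE = 4096

# Query the real page size from the OS instead; the Hadoop patch does the
# equivalent via NativeIO.POSIX.getCacheManipulator().getOperatingSystemPageSize().
page_size = mmap.PAGESIZE  # same value as os.sysconf("SC_PAGE_SIZE") on POSIX

print(page_size)
```

On a PowerPC host this prints 65536 rather than 4096, which is exactly why the hardcoded value is wrong there.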



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11110) Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC

2016-11-04 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HDFS-11110:
--
Attachment: HDFS-11110.001.patch

> Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC
> -
>
> Key: HDFS-11110
> URL: https://issues.apache.org/jira/browse/HDFS-11110
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HDFS-11110.001.patch
>
>







[jira] [Updated] (HDFS-11110) Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC

2016-11-04 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HDFS-11110:
--
Status: Patch Available  (was: Open)

> Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC
> -
>
> Key: HDFS-11110
> URL: https://issues.apache.org/jira/browse/HDFS-11110
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HDFS-11110.001.patch
>
>







[jira] [Updated] (HDFS-11110) Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC

2016-11-04 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HDFS-11110:
--
Assignee: (was: ramtin)

> Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC
> -
>
> Key: HDFS-11110
> URL: https://issues.apache.org/jira/browse/HDFS-11110
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ramtin
>







[jira] [Assigned] (HDFS-11110) Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC

2016-11-04 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin reassigned HDFS-11110:
-

Assignee: ramtin

> Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC
> -
>
> Key: HDFS-11110
> URL: https://issues.apache.org/jira/browse/HDFS-11110
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ramtin
>Assignee: ramtin
>







[jira] [Created] (HDFS-11110) Hardcoded BLOCK SIZE value of 4096 is not appropriate for PowerPC

2016-11-04 Thread ramtin (JIRA)
ramtin created HDFS-11110:
-

 Summary: Hardcoded BLOCK SIZE value of 4096 is not appropriate for 
PowerPC
 Key: HDFS-11110
 URL: https://issues.apache.org/jira/browse/HDFS-11110
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: ramtin
Assignee: ramtin









[jira] [Commented] (HDFS-10382) In WebHDFS numeric usernames do not work with DataNode

2016-05-18 Thread ramtin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15289536#comment-15289536
 ] 

ramtin commented on HDFS-10382:
---

Thanks [~aw] for your comment. I think you are right, but this patch just tries 
to fix the HDFS-4983 problem of reading the domain pattern from the 
configuration, and this problem can happen even for non-numeric usernames. 


> In WebHDFS numeric usernames do not work with DataNode
> --
>
> Key: HDFS-10382
> URL: https://issues.apache.org/jira/browse/HDFS-10382
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HADOOP-10382.patch
>
>
> Operations like {code:java}curl -i 
> -L"http://:/webhdfs/v1/?user.name=0123&op=OPEN"{code} that are 
> directed to the DataNode fail because the suggested domain pattern is not 
> read from the configuration.
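The pattern check behind this failure can be illustrated briefly. In the sketch below, the default pattern string is quoted from memory and the relaxed pattern is a hypothetical value an operator might set via `dfs.webhdfs.user.provider.user.pattern`; both should be treated as assumptions:

```python
import re

# Default WebHDFS username pattern (introduced by HDFS-4983); the exact
# default string here is an assumption, not quoted from the source.
DEFAULT_PATTERN = r"^[A-Za-z_][A-Za-z0-9._-]*[$]?$"

# Hypothetical relaxed pattern that also accepts purely numeric usernames.
RELAXED_PATTERN = r"^[A-Za-z0-9._-]+$"

# The default requires a leading letter or underscore, so "0123" is rejected;
# the bug is that the DataNode ignores the configured (relaxed) pattern.
print(bool(re.match(DEFAULT_PATTERN, "0123")))  # False
print(bool(re.match(RELAXED_PATTERN, "0123")))  # True
```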






[jira] [Updated] (HDFS-10382) In WebHDFS numeric usernames do not work with DataNode

2016-05-10 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HDFS-10382:
--
Attachment: HADOOP-10382.patch

> In WebHDFS numeric usernames do not work with DataNode
> --
>
> Key: HDFS-10382
> URL: https://issues.apache.org/jira/browse/HDFS-10382
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HADOOP-10382.patch
>
>
> Operations like {code:java}curl -i 
> -L"http://:/webhdfs/v1/?user.name=0123&op=OPEN"{code} that are 
> directed to the DataNode fail because the suggested domain pattern is not 
> read from the configuration.






[jira] [Updated] (HDFS-10382) In WebHDFS numeric usernames do not work with DataNode

2016-05-09 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HDFS-10382:
--
Status: Patch Available  (was: Open)

> In WebHDFS numeric usernames do not work with DataNode
> --
>
> Key: HDFS-10382
> URL: https://issues.apache.org/jira/browse/HDFS-10382
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: ramtin
>Assignee: ramtin
>
> Operations like {code:java}curl -i 
> -L"http://:/webhdfs/v1/?user.name=0123&op=OPEN"{code} that are 
> directed to the DataNode fail because the suggested domain pattern is not 
> read from the configuration.






[jira] [Created] (HDFS-10382) In WebHDFS numeric usernames do not work with DataNode

2016-05-09 Thread ramtin (JIRA)
ramtin created HDFS-10382:
-

 Summary: In WebHDFS numeric usernames do not work with DataNode
 Key: HDFS-10382
 URL: https://issues.apache.org/jira/browse/HDFS-10382
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Reporter: ramtin
Assignee: ramtin


Operations like {code:java}curl -i 
-L"http://:/webhdfs/v1/?user.name=0123&op=OPEN"{code} that are 
directed to the DataNode fail because the suggested domain pattern is not 
read from the configuration.






[jira] [Updated] (HDFS-9195) TestDelegationTokenForProxyUser.testWebHdfsDoAs fails on trunk

2015-10-06 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HDFS-9195:
-
Attachment: HDFS-9195.001.patch

cachedHomeDirectory is generated for the RealUser, so webhdfs.getHomeDirectory() 
won't provide a new path for the ProxyUser.
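The failure mode can be sketched abstractly. The class and method names below are invented for illustration and do not reflect the actual Hadoop code; the point is only that a single cached value ignores which user is asking:

```python
# Buggy shape: one cached home directory, filled on first call, so a later
# call on behalf of a proxy user still returns the real user's path.
class BuggyClient:
    def __init__(self):
        self._cached_home = None

    def get_home_directory(self, user):
        if self._cached_home is None:
            self._cached_home = "/user/" + user
        return self._cached_home

# Fixed shape: cache keyed by user, so RealUser and ProxyUser get distinct paths.
class FixedClient:
    def __init__(self):
        self._cached_homes = {}

    def get_home_directory(self, user):
        return self._cached_homes.setdefault(user, "/user/" + user)

buggy, fixed = BuggyClient(), FixedClient()
buggy.get_home_directory("RealUser")
print(buggy.get_home_directory("ProxyUser"))  # /user/RealUser (stale)
fixed.get_home_directory("RealUser")
print(fixed.get_home_directory("ProxyUser"))  # /user/ProxyUser
```

This is exactly the mismatch the test asserts on: it expects `/user/ProxyUser` but gets `/user/RealUser`.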

> TestDelegationTokenForProxyUser.testWebHdfsDoAs fails on trunk
> --
>
> Key: HDFS-9195
> URL: https://issues.apache.org/jira/browse/HDFS-9195
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
> Attachments: HDFS-9195.001.patch
>
>
> {quote}
> testWebHdfsDoAs(org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser)
>   Time elapsed: 1.299 sec  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<...ocalhost:44528/user/[Proxy]User> 
> but was:<...ocalhost:44528/user/[Real]User>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser.testWebHdfsDoAs(TestDelegationTokenForProxyUser.java:163)
> {quote}





[jira] [Updated] (HDFS-9195) TestDelegationTokenForProxyUser.testWebHdfsDoAs fails on trunk

2015-10-06 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HDFS-9195:
-
Assignee: ramtin
  Status: Patch Available  (was: Open)

> TestDelegationTokenForProxyUser.testWebHdfsDoAs fails on trunk
> --
>
> Key: HDFS-9195
> URL: https://issues.apache.org/jira/browse/HDFS-9195
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: ramtin
> Attachments: HDFS-9195.001.patch
>
>
> {quote}
> testWebHdfsDoAs(org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser)
>   Time elapsed: 1.299 sec  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<...ocalhost:44528/user/[Proxy]User> 
> but was:<...ocalhost:44528/user/[Real]User>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser.testWebHdfsDoAs(TestDelegationTokenForProxyUser.java:163)
> {quote}





[jira] [Assigned] (HDFS-1950) Blocks that are under construction are not getting read if the blocks are more than 10. Only complete blocks are read properly.

2015-04-29 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin reassigned HDFS-1950:


Assignee: ramtin  (was: Uma Maheswara Rao G)

 Blocks that are under construction are not getting read if the blocks are 
 more than 10. Only complete blocks are read properly. 
 

 Key: HDFS-1950
 URL: https://issues.apache.org/jira/browse/HDFS-1950
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client, namenode
Affects Versions: 0.20.205.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramtin
Priority: Blocker
 Attachments: HDFS-1950-2.patch, HDFS-1950.1.patch, 
 hdfs-1950-0.20-append-tests.txt, hdfs-1950-trunk-test.txt, 
 hdfs-1950-trunk-test.txt


 Before going to the root cause, let's look at the read behavior for a file 
 having more than 10 blocks in the append case. 
 Logic: 
  
 The DFSInputStream has a prefetch size, dfs.read.prefetch.size, with a 
 default value of 10. 
 This prefetch size is the number of block locations that the client fetches 
 from the namenode when reading a file. 
 For example, assume a file X with 22 blocks resides in HDFS: 
 the reader first fetches 10 blocks from the namenode and starts reading; 
 after that, it fetches the next 10 blocks from the NN and continues reading; 
 finally, it fetches the remaining 2 blocks from the NN and completes the read. 
 Cause: 
 === 
 The scenario that fails: a writer wrote 10+ blocks plus a partial block and 
 called sync. A reader trying to read the file will not get the last partial 
 block. 
 The client first gets 10 block locations from the NN. It then checks whether 
 the file is under construction and, if so, gets the size of the last partial 
 block from the datanode and reads the full file. 
 However, when the number of blocks is more than 10, the last block will not 
 be in the first fetch; it arrives in a later fetch (the (num of blocks / 
 10)th one). 
 The problem is that the DFSClient has no logic to get the size of the last 
 partial block for any fetch other than the first, so the reader cannot read 
 all of the data that was synced. 
 Also, the InputStream.available API uses the first fetched block size to 
 iterate; ideally this size should be increased.
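The fetch arithmetic described above can be made concrete; the helper name below is invented for illustration:

```python
PREFETCH = 10  # default value of dfs.read.prefetch.size

def fetch_index_of_last_block(num_blocks):
    """0-based index of the namenode fetch that returns the last block."""
    return (num_blocks - 1) // PREFETCH

# For the 22-block file X above, the last (possibly partial) block only
# arrives in the third fetch, where the partial-block-size logic is missing.
print(fetch_index_of_last_block(22))  # 2
# With 10 or fewer blocks the last block is in the first fetch, so reads work.
print(fetch_index_of_last_block(10))  # 0
```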





[jira] [Updated] (HDFS-1950) Blocks that are under construction are not getting read if the blocks are more than 10. Only complete blocks are read properly.

2015-04-29 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HDFS-1950:
-
Assignee: (was: ramtin)

 Blocks that are under construction are not getting read if the blocks are 
 more than 10. Only complete blocks are read properly. 
 

 Key: HDFS-1950
 URL: https://issues.apache.org/jira/browse/HDFS-1950
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client, namenode
Affects Versions: 0.20.205.0
Reporter: ramkrishna.s.vasudevan
Priority: Blocker
 Attachments: HDFS-1950-2.patch, HDFS-1950.1.patch, 
 hdfs-1950-0.20-append-tests.txt, hdfs-1950-trunk-test.txt, 
 hdfs-1950-trunk-test.txt







[jira] [Updated] (HDFS-1950) Blocks that are under construction are not getting read if the blocks are more than 10. Only complete blocks are read properly.

2015-04-29 Thread ramtin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramtin updated HDFS-1950:
-
Assignee: Uma Maheswara Rao G

 Blocks that are under construction are not getting read if the blocks are 
 more than 10. Only complete blocks are read properly. 
 

 Key: HDFS-1950
 URL: https://issues.apache.org/jira/browse/HDFS-1950
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client, namenode
Affects Versions: 0.20.205.0
Reporter: ramkrishna.s.vasudevan
Assignee: Uma Maheswara Rao G
Priority: Blocker
 Attachments: HDFS-1950-2.patch, HDFS-1950.1.patch, 
 hdfs-1950-0.20-append-tests.txt, hdfs-1950-trunk-test.txt, 
 hdfs-1950-trunk-test.txt




