[jira] [Updated] (HADOOP-10136) Custom JMX server to avoid random port usage by default JMX Server

2014-02-05 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10136:
---

Attachment: HADOOP-10136.patch

Updated the patch with AbstractService implementation.

 Custom JMX server to avoid random port usage by default JMX Server
 --

 Key: HADOOP-10136
 URL: https://issues.apache.org/jira/browse/HADOOP-10136
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10136.patch, HADOOP-10136.patch


 If any Java process wants to enable the JMX MBean server, then the 
 following VM arguments need to be passed.
 {code}
 -Dcom.sun.management.jmxremote
 -Dcom.sun.management.jmxremote.port=14005
 -Dcom.sun.management.jmxremote.local.only=false
 -Dcom.sun.management.jmxremote.authenticate=false
 -Dcom.sun.management.jmxremote.ssl=false{code}
 But the issue here is that this will open one more random port, in addition to 14005, 
 while starting JMX. 
 This can be a problem if that random port is needed by some other service.
 So, support a custom JMX server through which the random port can be avoided.
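For illustration, here is a minimal sketch of the general technique (not necessarily what the attached patch does): start an RMI registry and a JMXConnectorServer on fixed ports so that no random port is opened. The class name and port numbers below are assumptions made only for this example.
{code}
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServer;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

// Hypothetical example: bind both the RMI registry and the RMI server object
// to fixed ports (14005 and 14006 here) so the JVM does not pick a random port.
public class FixedPortJmxServer {
  public static void main(String[] args) throws Exception {
    int registryPort = 14005;
    int serverPort = 14006;
    LocateRegistry.createRegistry(registryPort);
    MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
    JMXServiceURL url = new JMXServiceURL(
        "service:jmx:rmi://localhost:" + serverPort
        + "/jndi/rmi://localhost:" + registryPort + "/jmxrmi");
    Map<String, Object> env = new HashMap<String, Object>();
    JMXConnectorServer cs =
        JMXConnectorServerFactory.newJMXConnectorServer(url, env, mbs);
    cs.start(); // clients now connect through the two fixed ports only
  }
}
{code}
With a connector server like this, only the two configured ports need to be reachable, instead of one fixed port plus an unpredictable one.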



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10251) Both NameNodes could be in STANDBY State if SNN network is unstable

2014-02-04 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890860#comment-13890860
 ] 

Vinay commented on HADOOP-10251:


Hi, can someone please review the patch? Thanks.

 Both NameNodes could be in STANDBY State if SNN network is unstable
 ---

 Key: HADOOP-10251
 URL: https://issues.apache.org/jira/browse/HADOOP-10251
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.2.0
Reporter: Vinay
Assignee: Vinay
Priority: Critical
 Attachments: HADOOP-10251.patch, HADOOP-10251.patch, 
 HADOOP-10251.patch, HADOOP-10251.patch


 The following corner-case scenario happened in one of our clusters.
 1. NN1 was Active and NN2 was Standby.
 2. The NN2 machine's network was slow.
 3. NN1 got shut down.
 4. NN2's ZKFC got the notification and tried to check the old active for 
 fencing. (This took a little more time, again due to the slow network.)
 5. In between, NN1 got restarted by our automatic monitoring, and ZKFC made 
 it Active.
 6. Now NN2's ZKFC got the old active as NN1 and did graceful fencing of NN1 to 
 STANDBY.
 7. Before writing the ActiveBreadCrumb to ZK, NN2's ZKFC got a session timeout and 
 was shut down before making NN2 Active.
 *Now the cluster has both NameNodes in STANDBY.*
 NN1's ZKFC still thinks that its NameNode is in the Active state.
 NN2's ZKFC is waiting for election.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9921) daemon scripts should remove pid file on stop call after stop or process is found not running

2014-02-04 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890864#comment-13890864
 ] 

Vinay commented on HADOOP-9921:
---

Hi, 
Can someone please review the patch? Thanks in advance.

 daemon scripts should remove pid file on stop call after stop or process is 
 found not running
 -

 Key: HADOOP-9921
 URL: https://issues.apache.org/jira/browse/HADOOP-9921
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-9921.patch


 The daemon scripts should remove the pid file on a stop call, even if the 
 process is found not running.
 The same pid file will be read by the start command. At that time, if the same pid 
 has been assigned to some other process, then start may fail.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10131) NetWorkTopology#countNumOfAvailableNodes() is returning wrong value if excluded nodes passed are not part of the cluster tree

2014-02-04 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890866#comment-13890866
 ] 

Vinay commented on HADOOP-10131:


Hi, can someone please take a look at the patch. Thanks in advance.

 NetWorkTopology#countNumOfAvailableNodes() is returning wrong value if 
 excluded nodes passed are not part of the cluster tree
 -

 Key: HADOOP-10131
 URL: https://issues.apache.org/jira/browse/HADOOP-10131
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.5-alpha
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10131.patch, HDFS-5112.patch


 I got "File /hdfs_COPYING_ could only be replicated to 0 nodes instead of 
 minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are 
 excluded in this operation." in the following case:
 1. A 2-DN cluster.
 2. One of the datanodes had not been responding for the last 10 minutes, but was 
 about to be detected as dead at the NN.
 3. Tried to write one file; for the block, the NN allocated both DNs.
 4. While creating the pipeline, the client took some time to detect the failure 
 of one node.
 5. Before the client detected the pipeline failure, the dead node was removed from 
 the cluster map on the NN side.
 6. Now the client abandoned the previous block and asked for a new block with the 
 dead node in the excluded list, and got the above exception even though one more 
 live node was available.
 When I dug into this more, I found that
 {{NetworkTopology#countNumOfAvailableNodes()}} does not give the correct count 
 when the excluded nodes passed from the client are not part of the cluster map.
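A minimal, self-contained sketch of the counting idea described above (plain strings stand in for the real NetworkTopology/Node types, which is an assumption of this example): excluded nodes should only reduce the count when they are actually present in the cluster map.
{code}
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

// Simplified illustration, not the actual NetworkTopology code.
public class CountAvailableNodesExample {
  static int countAvailable(Set<String> clusterNodes, Collection<String> excluded) {
    int count = clusterNodes.size();
    for (String node : excluded) {
      // The reported bug: decrementing for excluded nodes that are no longer in
      // the cluster map makes the result too small.
      if (clusterNodes.contains(node)) {
        count--;
      }
    }
    return count;
  }

  public static void main(String[] args) {
    // dn1 is dead and was already removed from the cluster map at the NN,
    // but the client still passes it in the excluded list; the count should be 1.
    Set<String> cluster = new HashSet<String>(Arrays.asList("dn2"));
    System.out.println(countAvailable(cluster, Arrays.asList("dn1"))); // prints 1
  }
}
{code}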



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10115) Exclude duplicate jars in hadoop package under different component's lib

2014-02-04 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890870#comment-13890870
 ] 

Vinay commented on HADOOP-10115:


Hi all,
Can someone take a look at the patch?
Thanks in advance.

 Exclude duplicate jars in hadoop package under different component's lib
 

 Key: HADOOP-10115
 URL: https://issues.apache.org/jira/browse/HADOOP-10115
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.2.0
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10115.patch, HADOOP-10115.patch, 
 HADOOP-10115.patch


 In the Hadoop package distribution, more than 90% of the jars are 
 duplicated in multiple places.
 For example,
 almost all jars in share/hadoop/hdfs/lib are already present in 
 share/hadoop/common/lib.
 The same is true for every other lib directory under share.
 In any case, for all the daemon processes, all of these directories are added to the classpath.
 So, to reduce the package distribution size and the classpath overhead, remove 
 the duplicate jars from the distribution.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9905) remove dependency of zookeeper for hadoop-client

2014-02-04 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890872#comment-13890872
 ] 

Vinay commented on HADOOP-9905:
---

Hi all,
Are any changes required on this jira?
If no changes are required, can we push this in?
Thanks

 remove dependency of zookeeper for hadoop-client
 

 Key: HADOOP-9905
 URL: https://issues.apache.org/jira/browse/HADOOP-9905
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta, 2.0.6-alpha
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-9905.patch


 The zookeeper dependency was added for ZKFC, which is not used by clients.
 It is better to remove the dependency on the zookeeper jar for hadoop-client.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10136) Custom JMX server to avoid random port usage by default JMX Server

2014-02-04 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13890885#comment-13890885
 ] 

Vinay commented on HADOOP-10136:


bq. In this case we can change the default port right ?
We could change the default ports to some other range, but these ports have been there 
for a very long time. I feel such a change would need to be marked as incompatible.

{quote}In case of JMX even if we need to configure it is not possible.
So i think better to keep this JMX server as an option.{quote}
Yes. 

 Custom JMX server to avoid random port usage by default JMX Server
 --

 Key: HADOOP-10136
 URL: https://issues.apache.org/jira/browse/HADOOP-10136
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10136.patch


 If any Java process wants to enable the JMX MBean server, then the 
 following VM arguments need to be passed.
 {code}
 -Dcom.sun.management.jmxremote
 -Dcom.sun.management.jmxremote.port=14005
 -Dcom.sun.management.jmxremote.local.only=false
 -Dcom.sun.management.jmxremote.authenticate=false
 -Dcom.sun.management.jmxremote.ssl=false{code}
 But the issue here is that this will open one more random port, in addition to 14005, 
 while starting JMX. 
 This can be a problem if that random port is needed by some other service.
 So, support a custom JMX server through which the random port can be avoided.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10277) setfacl -x fails to parse ACL spec if trying to remove the mask entry.

2014-01-26 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13882560#comment-13882560
 ] 

Vinay commented on HADOOP-10277:


Thanks Chris for clearing the doubt. I will post a patch soon for refactoring 
as well as the fix.

 setfacl -x fails to parse ACL spec if trying to remove the mask entry.
 --

 Key: HADOOP-10277
 URL: https://issues.apache.org/jira/browse/HADOOP-10277
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-10277.1.patch


 You should be able to use setfacl -x to remove the mask entry (if also 
 removing all other extended ACL entries).  Right now, this causes a failure 
 to parse the ACL spec due to a bug in {{AclEntry#parseAclSpec}}.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (HADOOP-10277) setfacl -x fails to parse ACL spec if trying to remove the mask entry.

2014-01-26 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay reassigned HADOOP-10277:
--

Assignee: Vinay  (was: Chris Nauroth)

 setfacl -x fails to parse ACL spec if trying to remove the mask entry.
 --

 Key: HADOOP-10277
 URL: https://issues.apache.org/jira/browse/HADOOP-10277
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth
Assignee: Vinay
 Attachments: HADOOP-10277.1.patch, HADOOP-10277.patch


 You should be able to use setfacl -x to remove the mask entry (if also 
 removing all other extended ACL entries).  Right now, this causes a failure 
 to parse the ACL spec due to a bug in {{AclEntry#parseAclSpec}}.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10277) setfacl -x fails to parse ACL spec if trying to remove the mask entry.

2014-01-26 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10277:
---

Attachment: HADOOP-10277.patch

Hi Chris, attaching the refactored patch. 
It is almost the same as your initial work. 
Please review.

 setfacl -x fails to parse ACL spec if trying to remove the mask entry.
 --

 Key: HADOOP-10277
 URL: https://issues.apache.org/jira/browse/HADOOP-10277
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-10277.1.patch, HADOOP-10277.patch


 You should be able to use setfacl -x to remove the mask entry (if also 
 removing all other extended ACL entries).  Right now, this causes a failure 
 to parse the ACL spec due to a bug in {{AclEntry#parseAclSpec}}.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10277) setfacl -x fails to parse ACL spec if trying to remove the mask entry.

2014-01-25 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13882181#comment-13882181
 ] 

Vinay commented on HADOOP-10277:


Hi Chris,
I found the issue. 
The issue is that {{String.split()}} does not add trailing empty strings to the parsed 
array. 
This can be fixed.

But my doubt is whether removing the mask is allowed at all.

My setfacl throws an error when I try to remove the mask entry on my SUSE Linux box:
{noformat}vinay@host-10-18-40-99:~ setfacl -x mask:: testAcl/
setfacl: testAcl/: Malformed access ACL 
`user::rwx,user:vinay:r--,group::r-x,group:users:r-x,other::r-x': Missing or 
wrong entry at entry 5
vinay@host-10-18-40-99:~ setfacl -x mask testAcl/
setfacl: testAcl/: Malformed access ACL 
`user::rwx,user:vinay:r--,group::r-x,group:users:r-x,other::r-x': Missing or 
wrong entry at entry 5{noformat}


Please validate whether this is the correct behaviour or whether we need to support 
removal of mask entries.
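To make the parsing pitfall concrete, here is a quick illustration of the {{String.split()}} behaviour mentioned above (plain JDK behaviour, independent of the Hadoop code):
{code}
public class SplitExample {
  public static void main(String[] args) {
    // Trailing empty strings are dropped by default, so "mask::" looks like a
    // single-field entry instead of a three-field one.
    String[] dropped = "mask::".split(":");     // ["mask"]
    String[] kept = "mask::".split(":", -1);    // ["mask", "", ""]
    System.out.println(dropped.length + " vs " + kept.length); // prints "1 vs 3"
  }
}
{code}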

 setfacl -x fails to parse ACL spec if trying to remove the mask entry.
 --

 Key: HADOOP-10277
 URL: https://issues.apache.org/jira/browse/HADOOP-10277
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth
Assignee: Chris Nauroth

 You should be able to use setfacl -x to remove the mask entry (which then 
 triggers recalculation of an automatically inferred mask if the file has an 
 extended ACL).  Right now, this causes a failure to parse the ACL spec due to 
 a bug in {{AclEntry#parseAclSpec}}.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10251) Both NameNodes could be in STANDBY State if SNN network is unstable

2014-01-22 Thread Vinay (JIRA)
Vinay created HADOOP-10251:
--

 Summary: Both NameNodes could be in STANDBY State if SNN network 
is unstable
 Key: HADOOP-10251
 URL: https://issues.apache.org/jira/browse/HADOOP-10251
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.2.0
Reporter: Vinay
Assignee: Vinay
Priority: Critical


The following corner-case scenario happened in one of our clusters.

1. NN1 was Active and NN2 was Standby.
2. The NN2 machine's network was slow.
3. NN1 got shut down.
4. NN2's ZKFC got the notification and tried to check the old active for 
fencing. (This took a little more time, again due to the slow network.)
5. In between, NN1 got restarted by our automatic monitoring, and ZKFC made it 
Active.
6. Now NN2's ZKFC got the old active as NN1 and did graceful fencing of NN1 to 
STANDBY.
7. Before writing the ActiveBreadCrumb to ZK, NN2's ZKFC got a session timeout and was 
shut down before making NN2 Active.


*Now the cluster has both NameNodes in STANDBY.*
NN1's ZKFC still thinks that its NameNode is in the Active state.
NN2's ZKFC is waiting for election.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10251) Both NameNodes could be in STANDBY State if SNN network is unstable

2014-01-22 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13878576#comment-13878576
 ] 

Vinay commented on HADOOP-10251:


The ZKFC health check checks the state of the NameNode, but it does not validate it 
against the expected state.
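As a rough sketch of the idea (an assumed helper, not the attached patch): the failover controller could query the service's reported HA state via {{HAServiceProtocol#getServiceStatus()}} and compare it with the state it believes it last set, rejoining the election when they disagree.
{code}
import java.io.IOException;
import org.apache.hadoop.ha.HAServiceProtocol;
import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;
import org.apache.hadoop.ha.HAServiceStatus;

// Hypothetical helper used only for illustration: report whether the monitored
// service's actual HA state differs from the state the ZKFC expects it to be in.
public class ExpectedStateCheck {
  public static boolean differsFromExpected(HAServiceProtocol proxy,
      HAServiceState expected) throws IOException {
    HAServiceStatus status = proxy.getServiceStatus();
    return expected != null && status.getState() != expected;
  }
}
{code}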


 Both NameNodes could be in STANDBY State if SNN network is unstable
 ---

 Key: HADOOP-10251
 URL: https://issues.apache.org/jira/browse/HADOOP-10251
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.2.0
Reporter: Vinay
Assignee: Vinay
Priority: Critical

 The following corner-case scenario happened in one of our clusters.
 1. NN1 was Active and NN2 was Standby.
 2. The NN2 machine's network was slow.
 3. NN1 got shut down.
 4. NN2's ZKFC got the notification and tried to check the old active for 
 fencing. (This took a little more time, again due to the slow network.)
 5. In between, NN1 got restarted by our automatic monitoring, and ZKFC made 
 it Active.
 6. Now NN2's ZKFC got the old active as NN1 and did graceful fencing of NN1 to 
 STANDBY.
 7. Before writing the ActiveBreadCrumb to ZK, NN2's ZKFC got a session timeout and 
 was shut down before making NN2 Active.
 *Now the cluster has both NameNodes in STANDBY.*
 NN1's ZKFC still thinks that its NameNode is in the Active state.
 NN2's ZKFC is waiting for election.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10251) Both NameNodes could be in STANDBY State if SNN network is unstable

2014-01-22 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10251:
---

Attachment: HADOOP-10251.patch

Attaching a patch for the above case. Please review

 Both NameNodes could be in STANDBY State if SNN network is unstable
 ---

 Key: HADOOP-10251
 URL: https://issues.apache.org/jira/browse/HADOOP-10251
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.2.0
Reporter: Vinay
Assignee: Vinay
Priority: Critical
 Attachments: HADOOP-10251.patch


 The following corner-case scenario happened in one of our clusters.
 1. NN1 was Active and NN2 was Standby.
 2. The NN2 machine's network was slow.
 3. NN1 got shut down.
 4. NN2's ZKFC got the notification and tried to check the old active for 
 fencing. (This took a little more time, again due to the slow network.)
 5. In between, NN1 got restarted by our automatic monitoring, and ZKFC made 
 it Active.
 6. Now NN2's ZKFC got the old active as NN1 and did graceful fencing of NN1 to 
 STANDBY.
 7. Before writing the ActiveBreadCrumb to ZK, NN2's ZKFC got a session timeout and 
 was shut down before making NN2 Active.
 *Now the cluster has both NameNodes in STANDBY.*
 NN1's ZKFC still thinks that its NameNode is in the Active state.
 NN2's ZKFC is waiting for election.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10251) Both NameNodes could be in STANDBY State if SNN network is unstable

2014-01-22 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10251:
---

Status: Patch Available  (was: Open)

 Both NameNodes could be in STANDBY State if SNN network is unstable
 ---

 Key: HADOOP-10251
 URL: https://issues.apache.org/jira/browse/HADOOP-10251
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.2.0
Reporter: Vinay
Assignee: Vinay
Priority: Critical
 Attachments: HADOOP-10251.patch


 The following corner-case scenario happened in one of our clusters.
 1. NN1 was Active and NN2 was Standby.
 2. The NN2 machine's network was slow.
 3. NN1 got shut down.
 4. NN2's ZKFC got the notification and tried to check the old active for 
 fencing. (This took a little more time, again due to the slow network.)
 5. In between, NN1 got restarted by our automatic monitoring, and ZKFC made 
 it Active.
 6. Now NN2's ZKFC got the old active as NN1 and did graceful fencing of NN1 to 
 STANDBY.
 7. Before writing the ActiveBreadCrumb to ZK, NN2's ZKFC got a session timeout and 
 was shut down before making NN2 Active.
 *Now the cluster has both NameNodes in STANDBY.*
 NN1's ZKFC still thinks that its NameNode is in the Active state.
 NN2's ZKFC is waiting for election.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10251) Both NameNodes could be in STANDBY State if SNN network is unstable

2014-01-22 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10251:
---

Attachment: HADOOP-10251.patch

Updated the patch to fix test failures.
These were mainly timing issues.

 Both NameNodes could be in STANDBY State if SNN network is unstable
 ---

 Key: HADOOP-10251
 URL: https://issues.apache.org/jira/browse/HADOOP-10251
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.2.0
Reporter: Vinay
Assignee: Vinay
Priority: Critical
 Attachments: HADOOP-10251.patch, HADOOP-10251.patch


 The following corner-case scenario happened in one of our clusters.
 1. NN1 was Active and NN2 was Standby.
 2. The NN2 machine's network was slow.
 3. NN1 got shut down.
 4. NN2's ZKFC got the notification and tried to check the old active for 
 fencing. (This took a little more time, again due to the slow network.)
 5. In between, NN1 got restarted by our automatic monitoring, and ZKFC made 
 it Active.
 6. Now NN2's ZKFC got the old active as NN1 and did graceful fencing of NN1 to 
 STANDBY.
 7. Before writing the ActiveBreadCrumb to ZK, NN2's ZKFC got a session timeout and 
 was shut down before making NN2 Active.
 *Now the cluster has both NameNodes in STANDBY.*
 NN1's ZKFC still thinks that its NameNode is in the Active state.
 NN2's ZKFC is waiting for election.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10251) Both NameNodes could be in STANDBY State if SNN network is unstable

2014-01-22 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10251:
---

Attachment: HADOOP-10251.patch

Updated the patch again.
Also enabled the tests on Windows.

 Both NameNodes could be in STANDBY State if SNN network is unstable
 ---

 Key: HADOOP-10251
 URL: https://issues.apache.org/jira/browse/HADOOP-10251
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.2.0
Reporter: Vinay
Assignee: Vinay
Priority: Critical
 Attachments: HADOOP-10251.patch, HADOOP-10251.patch, 
 HADOOP-10251.patch


 The following corner-case scenario happened in one of our clusters.
 1. NN1 was Active and NN2 was Standby.
 2. The NN2 machine's network was slow.
 3. NN1 got shut down.
 4. NN2's ZKFC got the notification and tried to check the old active for 
 fencing. (This took a little more time, again due to the slow network.)
 5. In between, NN1 got restarted by our automatic monitoring, and ZKFC made 
 it Active.
 6. Now NN2's ZKFC got the old active as NN1 and did graceful fencing of NN1 to 
 STANDBY.
 7. Before writing the ActiveBreadCrumb to ZK, NN2's ZKFC got a session timeout and 
 was shut down before making NN2 Active.
 *Now the cluster has both NameNodes in STANDBY.*
 NN1's ZKFC still thinks that its NameNode is in the Active state.
 NN2's ZKFC is waiting for election.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10251) Both NameNodes could be in STANDBY State if SNN network is unstable

2014-01-22 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10251:
---

Attachment: HADOOP-10251.patch

Attaching the updated patch; fixed a synchronization issue.

 Both NameNodes could be in STANDBY State if SNN network is unstable
 ---

 Key: HADOOP-10251
 URL: https://issues.apache.org/jira/browse/HADOOP-10251
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.2.0
Reporter: Vinay
Assignee: Vinay
Priority: Critical
 Attachments: HADOOP-10251.patch, HADOOP-10251.patch, 
 HADOOP-10251.patch, HADOOP-10251.patch


 The following corner-case scenario happened in one of our clusters.
 1. NN1 was Active and NN2 was Standby.
 2. The NN2 machine's network was slow.
 3. NN1 got shut down.
 4. NN2's ZKFC got the notification and tried to check the old active for 
 fencing. (This took a little more time, again due to the slow network.)
 5. In between, NN1 got restarted by our automatic monitoring, and ZKFC made 
 it Active.
 6. Now NN2's ZKFC got the old active as NN1 and did graceful fencing of NN1 to 
 STANDBY.
 7. Before writing the ActiveBreadCrumb to ZK, NN2's ZKFC got a session timeout and 
 was shut down before making NN2 Active.
 *Now the cluster has both NameNodes in STANDBY.*
 NN1's ZKFC still thinks that its NameNode is in the Active state.
 NN2's ZKFC is waiting for election.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10213) Fix bugs parsing ACL spec in FsShell setfacl.

2014-01-20 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877168#comment-13877168
 ] 

Vinay commented on HADOOP-10213:


Thanks Chris for the detailed reviews and commit.

 Fix bugs parsing ACL spec in FsShell setfacl.
 -

 Key: HADOOP-10213
 URL: https://issues.apache.org/jira/browse/HADOOP-10213
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth
Assignee: Vinay
 Fix For: HDFS ACLs (HDFS-4685)

 Attachments: HADOOP-10213.patch, HADOOP-10213.patch, 
 HADOOP-10213.patch, HADOOP-10213.patch, HADOOP-10213.patch


 When calling setfacl -x to remove ACL entries, it does not make sense for the 
 entries in the ACL spec to contain permissions.  The permissions should be 
 unspecified, and the CLI should return an error if the user attempts to 
 provide permissions.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10213) Fix bugs parsing ACL spec in FsShell setfacl.

2014-01-19 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10213:
---

Attachment: HADOOP-10213.patch

Attaching the updated patch

 Fix bugs parsing ACL spec in FsShell setfacl.
 -

 Key: HADOOP-10213
 URL: https://issues.apache.org/jira/browse/HADOOP-10213
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth
Assignee: Vinay
 Attachments: HADOOP-10213.patch, HADOOP-10213.patch, 
 HADOOP-10213.patch, HADOOP-10213.patch, HADOOP-10213.patch


 When calling setfacl -x to remove ACL entries, it does not make sense for the 
 entries in the ACL spec to contain permissions.  The permissions should be 
 unspecified, and the CLI should return an error if the user attempts to 
 provide permissions.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10213) setfacl -x should reject attempts to include permissions in the ACL spec.

2014-01-18 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10213:
---

Attachment: HADOOP-10213.patch

Thanks Chris for checking. 
Addressed all your comments and attached the patch. Please review.

 setfacl -x should reject attempts to include permissions in the ACL spec.
 -

 Key: HADOOP-10213
 URL: https://issues.apache.org/jira/browse/HADOOP-10213
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth
Assignee: Vinay
 Attachments: HADOOP-10213.patch, HADOOP-10213.patch, 
 HADOOP-10213.patch, HADOOP-10213.patch


 When calling setfacl -x to remove ACL entries, it does not make sense for the 
 entries in the ACL spec to contain permissions.  The permissions should be 
 unspecified, and the CLI should return an error if the user attempts to 
 provide permissions.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10213) setfacl -x should reject attempts to include permissions in the ACL spec.

2014-01-16 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10213:
---

Attachment: HADOOP-10213.patch

Reverted the changes in AclEntry equals() and hashCode().
Sorry for the noise. : )
Please review

 setfacl -x should reject attempts to include permissions in the ACL spec.
 -

 Key: HADOOP-10213
 URL: https://issues.apache.org/jira/browse/HADOOP-10213
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth
Assignee: Vinay
 Attachments: HADOOP-10213.patch, HADOOP-10213.patch


 When calling setfacl -x to remove ACL entries, it does not make sense for the 
 entries in the ACL spec to contain permissions.  The permissions should be 
 unspecified, and the CLI should return an error if the user attempts to 
 provide permissions.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10213) setfacl -x should reject attempts to include permissions in the ACL spec.

2014-01-16 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10213:
---

Attachment: HADOOP-10213.patch

Attached the refactored patch, which separates the parse logic into a static method in 
AclEntry.
Please review

 setfacl -x should reject attempts to include permissions in the ACL spec.
 -

 Key: HADOOP-10213
 URL: https://issues.apache.org/jira/browse/HADOOP-10213
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth
Assignee: Vinay
 Attachments: HADOOP-10213.patch, HADOOP-10213.patch, 
 HADOOP-10213.patch


 When calling setfacl -x to remove ACL entries, it does not make sense for the 
 entries in the ACL spec to contain permissions.  The permissions should be 
 unspecified, and the CLI should return an error if the user attempts to 
 provide permissions.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10213) setfacl -x should reject attempts to include permissions in the ACL spec.

2014-01-14 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13871616#comment-13871616
 ] 

Vinay commented on HADOOP-10213:


OK. If the validation of the ACL is already done for duplicates, then there is no problem. 
The only thing I am concerned about is that, while implementing removal of an ACL entry, 
the proper ACL entry should be found regardless of its permission. 
I will post a patch soon, removing the modifications to AclEntry.

 setfacl -x should reject attempts to include permissions in the ACL spec.
 -

 Key: HADOOP-10213
 URL: https://issues.apache.org/jira/browse/HADOOP-10213
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth
Assignee: Vinay
 Attachments: HADOOP-10213.patch


 When calling setfacl -x to remove ACL entries, it does not make sense for the 
 entries in the ACL spec to contain permissions.  The permissions should be 
 unspecified, and the CLI should return an error if the user attempts to 
 provide permissions.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10213) setfacl -x should reject attempts to include permissions in the ACL spec.

2014-01-13 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870382#comment-13870382
 ] 

Vinay commented on HADOOP-10213:


bq. Hi, Vinay. This looks good, but I think we'll need to revert the AclEntry 
portion of the change. There are various unit tests that rely on assertEquals 
or assertArrayEquals to check that the correct ACL was applied to a file. With 
this change, those assertEquals calls would pass even if the permissions inside 
the ACL entries were incorrect. Even putting aside tests, this is a public 
user-facing class, and callers likely would find it surprising if 
user:bruce:rwx and user:bruce:--- were considered equal.
For exactly this reason I had earlier included the permissions from the command 
line for -x as well.
But consider the case where permissions are not passed from the command line, yet the 
existing ACL entries contain an entry for the same user/group with some permissions. In 
this case the permissions differ, so the objects also differ.
In general, there will be only one ACL entry per user/group of each type, no 
matter what the permissions are. I agree that we cannot consider 
user:bruce:rwx and user:bruce:--- as equal, but both of these entries also 
cannot be present in the list of ACL entries, right?

So my preference is to check permissions separately whenever necessary, 
instead of including them in equals() and hashCode(). 
What do you say?
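To make the discussion concrete, here is a small sketch building the two entries mentioned above with the {{AclEntry.Builder}} API (assumed to be available on the HDFS-4685 branch); whether the two objects should compare equal is exactly the question being debated.
{code}
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

// Two named-user entries that differ only in permission: user:bruce:rwx and
// user:bruce:---. With permissions included in equals()/hashCode() they are not
// equal; with permissions excluded, a removal request could match the existing
// entry regardless of its permission bits.
public class AclEntryEqualityExample {
  public static void main(String[] args) {
    AclEntry rwx = new AclEntry.Builder()
        .setType(AclEntryType.USER).setName("bruce")
        .setPermission(FsAction.ALL).build();
    AclEntry none = new AclEntry.Builder()
        .setType(AclEntryType.USER).setName("bruce")
        .setPermission(FsAction.NONE).build();
    System.out.println(rwx.equals(none));
  }
}
{code}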

 setfacl -x should reject attempts to include permissions in the ACL spec.
 -

 Key: HADOOP-10213
 URL: https://issues.apache.org/jira/browse/HADOOP-10213
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth
Assignee: Vinay
 Attachments: HADOOP-10213.patch


 When calling setfacl -x to remove ACL entries, it does not make sense for the 
 entries in the ACL spec to contain permissions.  The permissions should be 
 unspecified, and the CLI should return an error if the user attempts to 
 provide permissions.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10213) setfacl -x should reject attempts to include permissions in the ACL spec.

2014-01-09 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13866531#comment-13866531
 ] 

Vinay commented on HADOOP-10213:


Please review

 setfacl -x should reject attempts to include permissions in the ACL spec.
 -

 Key: HADOOP-10213
 URL: https://issues.apache.org/jira/browse/HADOOP-10213
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth
Assignee: Vinay
 Attachments: HADOOP-10213.patch


 When calling setfacl -x to remove ACL entries, it does not make sense for the 
 entries in the ACL spec to contain permissions.  The permissions should be 
 unspecified, and the CLI should return an error if the user attempts to 
 provide permissions.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10213) setfacl -x should reject attempts to include permissions in the ACL spec.

2014-01-09 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10213:
---

Attachment: HADOOP-10213.patch

Attached the patch adding validation for -x.

Also removed the permissions from equals() and hashCode(), as an AclEntry may 
not always carry permissions.

 setfacl -x should reject attempts to include permissions in the ACL spec.
 -

 Key: HADOOP-10213
 URL: https://issues.apache.org/jira/browse/HADOOP-10213
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth
Assignee: Vinay
 Attachments: HADOOP-10213.patch


 When calling setfacl -x to remove ACL entries, it does not make sense for the 
 entries in the ACL spec to contain permissions.  The permissions should be 
 unspecified, and the CLI should return an error if the user attempts to 
 provide permissions.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (HADOOP-10213) setfacl -x should reject attempts to include permissions in the ACL spec.

2014-01-08 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay reassigned HADOOP-10213:
--

Assignee: Vinay

 setfacl -x should reject attempts to include permissions in the ACL spec.
 -

 Key: HADOOP-10213
 URL: https://issues.apache.org/jira/browse/HADOOP-10213
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth
Assignee: Vinay

 When calling setfacl -x to remove ACL entries, it does not make sense for the 
 entries in the ACL spec to contain permissions.  The permissions should be 
 unspecified, and the CLI should return an error if the user attempts to 
 provide permissions.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10213) setfacl -x should reject attempts to include permissions in the ACL spec.

2014-01-08 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13866298#comment-13866298
 ] 

Vinay commented on HADOOP-10213:


Thanks Chris. I will make the necessary changes and post a patch soon.

 setfacl -x should reject attempts to include permissions in the ACL spec.
 -

 Key: HADOOP-10213
 URL: https://issues.apache.org/jira/browse/HADOOP-10213
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth

 When calling setfacl -x to remove ACL entries, it does not make sense for the 
 entries in the ACL spec to contain permissions.  The permissions should be 
 unspecified, and the CLI should return an error if the user attempts to 
 provide permissions.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10187) FsShell CLI: add getfacl and setfacl with minimal support for getting and setting ACLs.

2013-12-26 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10187:
---

Attachment: HADOOP-10187.patch

Updated the patch
bq. FsCommand: Let's mention in the JavaDoc that it returns null if not found.
I didn't get this. Did you mean {{FsAction#getFsAction()}}? I have updated the 
Javadoc for this.

Also moved TestAclCommands to the common code. 
An end-to-end XML-based test will be added in HDFS later. I will file a JIRA for 
this.

 FsShell CLI: add getfacl and setfacl with minimal support for getting and 
 setting ACLs.
 ---

 Key: HADOOP-10187
 URL: https://issues.apache.org/jira/browse/HADOOP-10187
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: tools
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth
Assignee: Vinay
 Attachments: HADOOP-10187.patch, HDFS-5600.patch, HDFS-5600.patch, 
 HDFS-5600.patch


 Implement and test FsShell CLI commands for getfacl and setfacl.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10187) FsShell CLI: add getfacl and setfacl with minimal support for getting and setting ACLs.

2013-12-26 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13857349#comment-13857349
 ] 

Vinay commented on HADOOP-10187:


Filed HDFS-5702 to track e2e tests

 FsShell CLI: add getfacl and setfacl with minimal support for getting and 
 setting ACLs.
 ---

 Key: HADOOP-10187
 URL: https://issues.apache.org/jira/browse/HADOOP-10187
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: tools
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth
Assignee: Vinay
 Attachments: HADOOP-10187.patch, HDFS-5600.patch, HDFS-5600.patch, 
 HDFS-5600.patch


 Implement and test FsShell CLI commands for getfacl and setfacl.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-9867) org.apache.hadoop.mapred.LineRecordReader does not handle multibyte record delimiters well

2013-12-09 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-9867:
--

Attachment: HADOOP-9867.patch

Attaching the updated patch based on HADOOP-9622 changes

 org.apache.hadoop.mapred.LineRecordReader does not handle multibyte record 
 delimiters well
 --

 Key: HADOOP-9867
 URL: https://issues.apache.org/jira/browse/HADOOP-9867
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 0.20.2, 0.23.9, 2.2.0
 Environment: CDH3U2 Redhat linux 5.7
Reporter: Kris Geusebroek
Assignee: Vinay
Priority: Critical
 Attachments: HADOOP-9867.patch, HADOOP-9867.patch, HADOOP-9867.patch


 Having defined a record delimiter of multiple bytes in a new InputFileFormat 
 sometimes has the effect of skipping records from the input.
 This happens when the input splits are split off just after a 
 record separator. The starting point for the next split would be non-zero and 
 skipFirstLine would be true. A seek into the file is done to start - 1 and 
 the text until the first record delimiter is ignored (due to the presumption 
 that this record is already handled by the previous map task). Since the 
 record delimiter is multibyte, the seek only got the last byte of the delimiter 
 into scope and it is not recognized as a full delimiter. So the text is skipped 
 until the next delimiter (ignoring a full record!!)



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (HADOOP-10101) Update guava dependency to the latest version 15.0

2013-12-08 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10101:
---

Attachment: HADOOP-10101.patch

Attaching the same patch as in HDFS-5518, with Hadoop QA overall +1.

 Update guava dependency to the latest version 15.0
 --

 Key: HADOOP-10101
 URL: https://issues.apache.org/jira/browse/HADOOP-10101
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Rakesh R
 Attachments: HADOOP-10101-002.patch, HADOOP-10101.patch, 
 HADOOP-10101.patch


 The existing guava version is 11.0.2 which is quite old. This 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Assigned] (HADOOP-10101) Update guava dependency to the latest version 15.0

2013-12-08 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay reassigned HADOOP-10101:
--

Assignee: Vinay

 Update guava dependency to the latest version 15.0
 --

 Key: HADOOP-10101
 URL: https://issues.apache.org/jira/browse/HADOOP-10101
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Rakesh R
Assignee: Vinay
 Attachments: HADOOP-10101-002.patch, HADOOP-10101.patch, 
 HADOOP-10101.patch


 The existing guava version is 11.0.2 which is quite old. This 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10101) Update guava dependency to the latest version 15.0

2013-12-08 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13842852#comment-13842852
 ] 

Vinay commented on HADOOP-10101:


Hi [~ste...@apache.org], could you take a look at the changes?
Thanks,

 Update guava dependency to the latest version 15.0
 --

 Key: HADOOP-10101
 URL: https://issues.apache.org/jira/browse/HADOOP-10101
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Rakesh R
Assignee: Vinay
 Attachments: HADOOP-10101-002.patch, HADOOP-10101.patch, 
 HADOOP-10101.patch


 The existing guava version is 11.0.2 which is quite old. This 



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (HADOOP-10101) Update guava dependency to the latest version 15.0

2013-12-06 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13841134#comment-13841134
 ] 

Vinay commented on HADOOP-10101:


Any update on this Jira? 
If there is no objection, I would like to take it up and replace all deprecated 
usages with their successors.

 Update guava dependency to the latest version 15.0
 --

 Key: HADOOP-10101
 URL: https://issues.apache.org/jira/browse/HADOOP-10101
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Rakesh R
 Attachments: HADOOP-10101-002.patch, HADOOP-10101.patch


 The existing guava version is 11.0.2 which is quite old. This 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10142) Avoid groups lookup for unprivileged users such as dr.who

2013-12-06 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10142:
---

Attachment: HADOOP-10142.patch

Attaching the patch for the proposed 
{{hadoop.user.group.static.mapping.overrides}}.
Please review 
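For illustration, a tiny example of how such a static override might be set programmatically. The property name comes from the comment above; the value syntax shown (semicolon-separated users, each with an optional comma-separated group list) is an assumption, so refer to the attached patch for the authoritative format.
{code}
import org.apache.hadoop.conf.Configuration;

// Illustration only: map the unprivileged user dr.who to an empty group list so
// that no shell-based group lookup is attempted for it. The value format here is
// assumed, not taken from the patch.
public class StaticGroupMappingExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("hadoop.user.group.static.mapping.overrides", "dr.who=;");
    System.out.println(conf.get("hadoop.user.group.static.mapping.overrides"));
  }
}
{code}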

 Avoid groups lookup for unprivileged users such as dr.who
 ---

 Key: HADOOP-10142
 URL: https://issues.apache.org/jira/browse/HADOOP-10142
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10142.patch, HADOOP-10142.patch, 
 HADOOP-10142.patch, HADOOP-10142.patch


 Reduce the logs generated by ShellBasedUnixGroupsMapping.
 For example, using WebHdfs from Windows generates the following log for each request:
 {noformat}2013-12-03 11:34:56,589 WARN 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying 
 to get groups for user dr.who
 org.apache.hadoop.util.Shell$ExitCodeException: id: dr.who: No such user
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:504)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:636)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:725)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:708)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:83)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:95)
 at 
 org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1376)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.init(FSPermissionChecker.java:63)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3228)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4063)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4052)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:748)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getDirectoryListing(NamenodeWebHdfsMethods.java:715)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getListingStream(NamenodeWebHdfsMethods.java:727)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:675)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.access$400(NamenodeWebHdfsMethods.java:114)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:623)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:618)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:618)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getRoot(NamenodeWebHdfsMethods.java:586)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
 at 
 com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
 at 
 com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 at 
 com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
 

[jira] [Commented] (HADOOP-10142) Avoid groups lookup for unprivileged users such as dr.who

2013-12-06 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13842010#comment-13842010
 ] 

Vinay commented on HADOOP-10142:


Thanks all.. 

 Avoid groups lookup for unprivileged users such as dr.who
 ---

 Key: HADOOP-10142
 URL: https://issues.apache.org/jira/browse/HADOOP-10142
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinay
Assignee: Vinay
 Fix For: 2.3.0

 Attachments: HADOOP-10142.patch, HADOOP-10142.patch, 
 HADOOP-10142.patch, HADOOP-10142.patch


 Reduce the logs generated by ShellBasedUnixGroupsMapping.
 For example, using WebHdfs from Windows generates the following log for each request:
 {noformat}2013-12-03 11:34:56,589 WARN 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying 
 to get groups for user dr.who
 org.apache.hadoop.util.Shell$ExitCodeException: id: dr.who: No such user
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:504)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:636)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:725)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:708)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:83)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:95)
 at 
 org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1376)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.init(FSPermissionChecker.java:63)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3228)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4063)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4052)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:748)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getDirectoryListing(NamenodeWebHdfsMethods.java:715)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getListingStream(NamenodeWebHdfsMethods.java:727)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:675)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.access$400(NamenodeWebHdfsMethods.java:114)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:623)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:618)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:618)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getRoot(NamenodeWebHdfsMethods.java:586)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
 at 
 com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
 at 
 com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 at 
 com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
 

[jira] [Commented] (HADOOP-10142) Reduce the log generated by ShellBasedUnixGroupsMapping

2013-12-05 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13840773#comment-13840773
 ] 

Vinay commented on HADOOP-10142:


Thanks Colin and Andrew. 
I will make the unprivileged users configurable and post a patch soon.

 Reduce the log generated by ShellBasedUnixGroupsMapping
 ---

 Key: HADOOP-10142
 URL: https://issues.apache.org/jira/browse/HADOOP-10142
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10142.patch, HADOOP-10142.patch


 Reduce the logs generated by ShellBasedUnixGroupsMapping.
 For example, using WebHdfs from Windows generates the following log for each request:
 {noformat}2013-12-03 11:34:56,589 WARN 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying 
 to get groups for user dr.who
 org.apache.hadoop.util.Shell$ExitCodeException: id: dr.who: No such user
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:504)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:636)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:725)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:708)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:83)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:95)
 at 
 org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1376)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.init(FSPermissionChecker.java:63)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3228)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4063)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4052)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:748)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getDirectoryListing(NamenodeWebHdfsMethods.java:715)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getListingStream(NamenodeWebHdfsMethods.java:727)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:675)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.access$400(NamenodeWebHdfsMethods.java:114)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:623)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:618)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:618)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getRoot(NamenodeWebHdfsMethods.java:586)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
 at 
 com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
 at 
 com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 at 
 com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
 

[jira] [Updated] (HADOOP-10142) Avoid groups lookup for unprivileged users such as dr.who

2013-12-05 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10142:
---

Summary: Avoid groups lookup for unprivileged users such as dr.who  (was: 
Reduce the log generated by ShellBasedUnixGroupsMapping)

 Avoid groups lookup for unprivileged users such as dr.who
 ---

 Key: HADOOP-10142
 URL: https://issues.apache.org/jira/browse/HADOOP-10142
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10142.patch, HADOOP-10142.patch, 
 HADOOP-10142.patch


 Reduce the logs generated by ShellBasedUnixGroupsMapping.
 For ex: Using WebHdfs from windows generates following log for each request
 {noformat}2013-12-03 11:34:56,589 WARN 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying 
 to get groups for user dr.who
 org.apache.hadoop.util.Shell$ExitCodeException: id: dr.who: No such user
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:504)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:636)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:725)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:708)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:83)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:95)
 at 
 org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1376)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.init(FSPermissionChecker.java:63)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3228)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4063)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4052)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:748)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getDirectoryListing(NamenodeWebHdfsMethods.java:715)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getListingStream(NamenodeWebHdfsMethods.java:727)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:675)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.access$400(NamenodeWebHdfsMethods.java:114)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:623)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:618)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:618)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getRoot(NamenodeWebHdfsMethods.java:586)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
 at 
 com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
 at 
 com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 at 
 com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
 

[jira] [Updated] (HADOOP-10142) Reduce the log generated by ShellBasedUnixGroupsMapping

2013-12-05 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10142:
---

Attachment: HADOOP-10142.patch

Updated the patch as per comments.

Andrew, I didn't understand your last comment regarding the debug log. I removed 
that change altogether, since the huge logs can be avoided through configuration. :)

 Reduce the log generated by ShellBasedUnixGroupsMapping
 ---

 Key: HADOOP-10142
 URL: https://issues.apache.org/jira/browse/HADOOP-10142
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10142.patch, HADOOP-10142.patch, 
 HADOOP-10142.patch


 Reduce the logs generated by ShellBasedUnixGroupsMapping.
 For ex: Using WebHdfs from windows generates following log for each request
 {noformat}2013-12-03 11:34:56,589 WARN 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying 
 to get groups for user dr.who
 org.apache.hadoop.util.Shell$ExitCodeException: id: dr.who: No such user
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:504)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:636)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:725)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:708)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:83)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:95)
 at 
 org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1376)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.init(FSPermissionChecker.java:63)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3228)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4063)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4052)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:748)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getDirectoryListing(NamenodeWebHdfsMethods.java:715)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getListingStream(NamenodeWebHdfsMethods.java:727)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:675)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.access$400(NamenodeWebHdfsMethods.java:114)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:623)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:618)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:618)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getRoot(NamenodeWebHdfsMethods.java:586)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
 at 
 com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
 at 
 com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 at 
 com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
 

[jira] [Commented] (HADOOP-10142) Avoid groups lookup for unprivileged users such as dr.who

2013-12-05 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13840922#comment-13840922
 ] 

Vinay commented on HADOOP-10142:


Thanks Colin. 
How about hadoop.user.group.static.mapping, with static group mappings configured 
there and dr.who mapped to an empty group list?
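
To make the idea concrete, here is a minimal sketch of how such a static mapping could be set from code. The key name is just the one proposed above, and the value format (an empty group list for dr.who) is an assumption, not committed behaviour:
{code}
import org.apache.hadoop.conf.Configuration;

public class StaticMappingSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Hypothetical key and format: map dr.who to an empty group list so that
    // no shell "id" lookup is ever attempted for it.
    conf.set("hadoop.user.group.static.mapping", "dr.who=");
  }
}
{code}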

 Avoid groups lookup for unprivileged users such as dr.who
 ---

 Key: HADOOP-10142
 URL: https://issues.apache.org/jira/browse/HADOOP-10142
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10142.patch, HADOOP-10142.patch, 
 HADOOP-10142.patch


 Reduce the logs generated by ShellBasedUnixGroupsMapping.
 For ex: Using WebHdfs from windows generates following log for each request
 {noformat}2013-12-03 11:34:56,589 WARN 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying 
 to get groups for user dr.who
 org.apache.hadoop.util.Shell$ExitCodeException: id: dr.who: No such user
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:504)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:636)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:725)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:708)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:83)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:95)
 at 
 org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1376)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.init(FSPermissionChecker.java:63)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3228)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4063)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4052)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:748)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getDirectoryListing(NamenodeWebHdfsMethods.java:715)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getListingStream(NamenodeWebHdfsMethods.java:727)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:675)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.access$400(NamenodeWebHdfsMethods.java:114)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:623)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:618)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:618)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getRoot(NamenodeWebHdfsMethods.java:586)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
 at 
 com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
 at 
 com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 at 
 com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
 

[jira] [Commented] (HADOOP-10142) Avoid groups lookup for unprivileged users such as dr.who

2013-12-05 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13840930#comment-13840930
 ] 

Vinay commented on HADOOP-10142:


OK, got it. I will try to implement it and post a patch soon.

 Avoid groups lookup for unprivileged users such as dr.who
 ---

 Key: HADOOP-10142
 URL: https://issues.apache.org/jira/browse/HADOOP-10142
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10142.patch, HADOOP-10142.patch, 
 HADOOP-10142.patch


 Reduce the logs generated by ShellBasedUnixGroupsMapping.
 For ex: Using WebHdfs from windows generates following log for each request
 {noformat}2013-12-03 11:34:56,589 WARN 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying 
 to get groups for user dr.who
 org.apache.hadoop.util.Shell$ExitCodeException: id: dr.who: No such user
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:504)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:636)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:725)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:708)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:83)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:95)
 at 
 org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1376)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.init(FSPermissionChecker.java:63)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3228)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4063)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4052)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:748)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getDirectoryListing(NamenodeWebHdfsMethods.java:715)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getListingStream(NamenodeWebHdfsMethods.java:727)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:675)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.access$400(NamenodeWebHdfsMethods.java:114)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:623)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:618)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:618)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getRoot(NamenodeWebHdfsMethods.java:586)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
 at 
 com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
 at 
 com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 at 
 com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
 

[jira] [Commented] (HADOOP-10142) Reduce the log generated by ShellBasedUnixGroupsMapping

2013-12-04 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13839133#comment-13839133
 ] 

Vinay commented on HADOOP-10142:


Thanks Colin. Your idea makes sense.
I will post a patch for that soon.

 Reduce the log generated by ShellBasedUnixGroupsMapping
 ---

 Key: HADOOP-10142
 URL: https://issues.apache.org/jira/browse/HADOOP-10142
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10142.patch


 Reduce the logs generated by ShellBasedUnixGroupsMapping.
 For ex: Using WebHdfs from windows generates following log for each request
 {noformat}2013-12-03 11:34:56,589 WARN 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying 
 to get groups for user dr.who
 org.apache.hadoop.util.Shell$ExitCodeException: id: dr.who: No such user
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:504)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:636)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:725)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:708)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:83)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:95)
 at 
 org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1376)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.init(FSPermissionChecker.java:63)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3228)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4063)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4052)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:748)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getDirectoryListing(NamenodeWebHdfsMethods.java:715)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getListingStream(NamenodeWebHdfsMethods.java:727)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:675)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.access$400(NamenodeWebHdfsMethods.java:114)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:623)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:618)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:618)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getRoot(NamenodeWebHdfsMethods.java:586)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
 at 
 com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
 at 
 com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 at 
 com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
 com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
 at 
 

[jira] [Updated] (HADOOP-10142) Reduce the log generated by ShellBasedUnixGroupsMapping

2013-12-04 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10142:
---

Attachment: HADOOP-10142.patch

Attaching a patch that excludes dr.who from the group lookup.

 Reduce the log generated by ShellBasedUnixGroupsMapping
 ---

 Key: HADOOP-10142
 URL: https://issues.apache.org/jira/browse/HADOOP-10142
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10142.patch, HADOOP-10142.patch


 Reduce the logs generated by ShellBasedUnixGroupsMapping.
 For ex: Using WebHdfs from windows generates following log for each request
 {noformat}2013-12-03 11:34:56,589 WARN 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying 
 to get groups for user dr.who
 org.apache.hadoop.util.Shell$ExitCodeException: id: dr.who: No such user
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:504)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:636)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:725)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:708)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:83)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:95)
 at 
 org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1376)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.init(FSPermissionChecker.java:63)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3228)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4063)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4052)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:748)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getDirectoryListing(NamenodeWebHdfsMethods.java:715)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getListingStream(NamenodeWebHdfsMethods.java:727)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:675)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.access$400(NamenodeWebHdfsMethods.java:114)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:623)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:618)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:618)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getRoot(NamenodeWebHdfsMethods.java:586)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
 at 
 com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
 at 
 com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 at 
 com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
 com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
 at 
 

[jira] [Updated] (HADOOP-10142) Reduce the log generated by ShellBasedUnixGroupsMapping

2013-12-03 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10142:
---

Attachment: HADOOP-10142.patch

Attaching a patch that logs the full stack trace only at DEBUG level and just the message at WARN.
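
For reference, a minimal sketch of that logging pattern (assumed names, not the attached patch itself):
{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class GroupLookupLogging {
  private static final Log LOG = LogFactory.getLog(GroupLookupLogging.class);

  // Log only the message at WARN; attach the full stack trace only at DEBUG.
  static void logLookupFailure(String user, Exception e) {
    LOG.warn("got exception trying to get groups for user " + user
        + ": " + e.getMessage());
    if (LOG.isDebugEnabled()) {
      LOG.debug("Stack trace of the group lookup failure for user " + user, e);
    }
  }
}
{code}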

 Reduce the log generated by ShellBasedUnixGroupsMapping
 ---

 Key: HADOOP-10142
 URL: https://issues.apache.org/jira/browse/HADOOP-10142
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10142.patch


 Reduce the logs generated by ShellBasedUnixGroupsMapping.
 For ex: Using WebHdfs from windows generates following log for each request
 {noformat}2013-12-03 11:34:56,589 WARN 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying 
 to get groups for user dr.who
 org.apache.hadoop.util.Shell$ExitCodeException: id: dr.who: No such user
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:504)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:636)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:725)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:708)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:83)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:95)
 at 
 org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1376)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.init(FSPermissionChecker.java:63)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3228)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4063)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4052)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:748)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getDirectoryListing(NamenodeWebHdfsMethods.java:715)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getListingStream(NamenodeWebHdfsMethods.java:727)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:675)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.access$400(NamenodeWebHdfsMethods.java:114)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:623)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:618)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:618)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getRoot(NamenodeWebHdfsMethods.java:586)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
 at 
 com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
 at 
 com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 at 
 com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
 com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
 at 
 

[jira] [Updated] (HADOOP-10142) Reduce the log generated by ShellBasedUnixGroupsMapping

2013-12-03 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10142:
---

Status: Patch Available  (was: Open)

 Reduce the log generated by ShellBasedUnixGroupsMapping
 ---

 Key: HADOOP-10142
 URL: https://issues.apache.org/jira/browse/HADOOP-10142
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10142.patch


 Reduce the logs generated by ShellBasedUnixGroupsMapping.
 For ex: Using WebHdfs from windows generates following log for each request
 {noformat}2013-12-03 11:34:56,589 WARN 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying 
 to get groups for user dr.who
 org.apache.hadoop.util.Shell$ExitCodeException: id: dr.who: No such user
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:504)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:636)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:725)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:708)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:83)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:95)
 at 
 org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1376)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.init(FSPermissionChecker.java:63)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3228)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4063)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4052)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:748)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getDirectoryListing(NamenodeWebHdfsMethods.java:715)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getListingStream(NamenodeWebHdfsMethods.java:727)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:675)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.access$400(NamenodeWebHdfsMethods.java:114)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:623)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:618)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:618)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getRoot(NamenodeWebHdfsMethods.java:586)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
 at 
 com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
 at 
 com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 at 
 com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
 com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
 at 
 

[jira] [Updated] (HADOOP-10115) Exclude duplicate jars in hadoop package under different component's lib

2013-12-02 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10115:
---

Attachment: HADOOP-10115.patch

Updated

 Exclude duplicate jars in hadoop package under different component's lib
 

 Key: HADOOP-10115
 URL: https://issues.apache.org/jira/browse/HADOOP-10115
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.2.0
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10115.patch, HADOOP-10115.patch, 
 HADOOP-10115.patch


 In the Hadoop package distribution, more than 90% of the jars are duplicated 
 in multiple places.
 For example, almost all jars in share/hadoop/hdfs/lib are already present in 
 share/hadoop/common/lib, and the same is true for the other lib directories 
 under share.
 All of these directories are added to the classpath for the daemon processes 
 anyway, so to reduce the package distribution size and the classpath overhead, 
 remove the duplicate jars from the distribution.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-10142) Reduce the log generated by ShellBasedUnixGroupsMapping

2013-12-02 Thread Vinay (JIRA)
Vinay created HADOOP-10142:
--

 Summary: Reduce the log generated by ShellBasedUnixGroupsMapping
 Key: HADOOP-10142
 URL: https://issues.apache.org/jira/browse/HADOOP-10142
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinay
Assignee: Vinay


Reduce the logs generated by ShellBasedUnixGroupsMapping.
For example, using WebHDFS from Windows generates the following log for each request:

{noformat}2013-12-03 11:34:56,589 WARN 
org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to 
get groups for user dr.who
org.apache.hadoop.util.Shell$ExitCodeException: id: dr.who: No such user

at org.apache.hadoop.util.Shell.runCommand(Shell.java:504)
at org.apache.hadoop.util.Shell.run(Shell.java:417)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:636)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:725)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:708)
at 
org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:83)
at 
org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52)
at 
org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
at org.apache.hadoop.security.Groups.getGroups(Groups.java:95)
at 
org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1376)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.init(FSPermissionChecker.java:63)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3228)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4063)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4052)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:748)
at 
org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getDirectoryListing(NamenodeWebHdfsMethods.java:715)
at 
org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getListingStream(NamenodeWebHdfsMethods.java:727)
at 
org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:675)
at 
org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.access$400(NamenodeWebHdfsMethods.java:114)
at 
org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:623)
at 
org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:618)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515)
at 
org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:618)
at 
org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getRoot(NamenodeWebHdfsMethods.java:586)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at 
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at 
com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at 
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
at 
com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
at 

[jira] [Commented] (HADOOP-10136) Custom JMX server to avoid random port usage by default JMX Server

2013-11-29 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13835444#comment-13835444
 ] 

Vinay commented on HADOOP-10136:


Thanks Steve for the input.
bq. Is the port the JMX server coming up on in use?
This is possible, but that was not my concern. The default JMX server uses one 
more random port in addition to the port configured via 
*-Dcom.sun.management.jmxremote.port*. 

I will try to implement the suggestions you have given.

bq. have a way to query the server for the port in use
Do you mean querying this via RPC?
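
For context, a minimal sketch of the usual JDK technique for avoiding that extra random port: create the RMI registry yourself and name both ports in the JMX service URL. This only illustrates the idea, not the HADOOP-10136 patch; the port and host are assumptions:
{code}
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import java.util.HashMap;
import javax.management.MBeanServer;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class FixedPortJmxServer {
  public static void main(String[] args) throws Exception {
    int port = 14005; // single, well-known port for both the registry and the server

    // Create the RMI registry on the known port instead of letting the default
    // agent export the RMI server objects on a second, random port.
    LocateRegistry.createRegistry(port);

    MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();

    // Naming the port twice in the URL pins both the registry and the server port.
    JMXServiceURL url = new JMXServiceURL(
        "service:jmx:rmi://localhost:" + port
        + "/jndi/rmi://localhost:" + port + "/jmxrmi");

    JMXConnectorServer connectorServer =
        JMXConnectorServerFactory.newJMXConnectorServer(
            url, new HashMap<String, Object>(), mbs);
    connectorServer.start();
  }
}
{code}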

 Custom JMX server to avoid random port usage by default JMX Server
 --

 Key: HADOOP-10136
 URL: https://issues.apache.org/jira/browse/HADOOP-10136
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Vinay
Assignee: Vinay

 If any of the Java processes wants to enable the JMX MBean server, then the 
 following VM arguments need to be passed.
 {code}
 -Dcom.sun.management.jmxremote
 -Dcom.sun.management.jmxremote.port=14005
 -Dcom.sun.management.jmxremote.local.only=false
 -Dcom.sun.management.jmxremote.authenticate=false
 -Dcom.sun.management.jmxremote.ssl=false{code}
 But the issue here is that this uses one more random port in addition to 14005 
 while starting JMX. 
 This can be a problem if that random port is needed by some other service.
 So support a custom JMX server through which the random port can be avoided.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10136) Custom JMX server to avoid random port usage by default JMX Server

2013-11-29 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10136:
---

Attachment: HADOOP-10136.patch

Attaching the initial patch.
Please review and let me know your feedback

 Custom JMX server to avoid random port usage by default JMX Server
 --

 Key: HADOOP-10136
 URL: https://issues.apache.org/jira/browse/HADOOP-10136
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10136.patch


 If any of the Java processes wants to enable the JMX MBean server, then the 
 following VM arguments need to be passed.
 {code}
 -Dcom.sun.management.jmxremote
 -Dcom.sun.management.jmxremote.port=14005
 -Dcom.sun.management.jmxremote.local.only=false
 -Dcom.sun.management.jmxremote.authenticate=false
 -Dcom.sun.management.jmxremote.ssl=false{code}
 But the issue here is that this uses one more random port in addition to 14005 
 while starting JMX. 
 This can be a problem if that random port is needed by some other service.
 So support a custom JMX server through which the random port can be avoided.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-10136) Custom JMX server to avoid random port usage by default JMX Server

2013-11-28 Thread Vinay (JIRA)
Vinay created HADOOP-10136:
--

 Summary: Custom JMX server to avoid random port usage by default 
JMX Server
 Key: HADOOP-10136
 URL: https://issues.apache.org/jira/browse/HADOOP-10136
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Vinay
Assignee: Vinay


If any of the Java processes wants to enable the JMX MBean server, then the 
following VM arguments need to be passed.
{code}
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=14005
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false{code}

But the issue here is that this uses one more random port in addition to 14005 while 
starting JMX. 
This can be a problem if that random port is needed by some other service.

So support a custom JMX server through which the random port can be avoided.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Moved] (HADOOP-10131) NetWorkTopology#countNumOfAvailableNodes() is returning wrong value if excluded nodes passed are not part of the cluster tree

2013-11-27 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay moved HDFS-5112 to HADOOP-10131:
--

  Component/s: (was: namenode)
Affects Version/s: (was: 2.0.5-alpha)
   (was: 3.0.0)
   3.0.0
   2.0.5-alpha
  Key: HADOOP-10131  (was: HDFS-5112)
  Project: Hadoop Common  (was: Hadoop HDFS)

 NetWorkTopology#countNumOfAvailableNodes() is returning wrong value if 
 excluded nodes passed are not part of the cluster tree
 -

 Key: HADOOP-10131
 URL: https://issues.apache.org/jira/browse/HADOOP-10131
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.5-alpha, 3.0.0
Reporter: Vinay
Assignee: Vinay
 Attachments: HDFS-5112.patch


 I got "File /hdfs_COPYING_ could only be replicated to 0 nodes instead of 
 minReplication (=1). There are 1 datanode(s) running and 1 node(s) are 
 excluded in this operation." in the following case:
 1. A 2-DN cluster.
 2. One of the datanodes had not been responding for the last 10 minutes and 
 was about to be detected as dead at the NN.
 3. Tried to write one file; for the block, the NN allocated both DNs.
 4. While creating the pipeline, the client took some time to detect the 
 failure of one node.
 5. Before the client detected the pipeline failure, the dead node was removed 
 from the cluster map on the NN side.
 6. The client then abandoned the previous block and asked for a new block with 
 the dead node in the excluded list, and got the above exception even though 
 one more live node was available.
 When I dug into this more, I found that 
 {{NetWorkTopology#countNumOfAvailableNodes()}} does not give the correct count 
 when the excludeNodes passed from the client are not part of the cluster map.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10131) NetWorkTopology#countNumOfAvailableNodes() is returning wrong value if excluded nodes passed are not part of the cluster tree

2013-11-27 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10131:
---

Attachment: HADOOP-10131.patch

Attached the updated patch with test.

Please review
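
To illustrate the intended behaviour, a simplified, self-contained sketch (hypothetical names, not the actual NetworkTopology code or the attached patch): excluded nodes that are not part of the cluster map should not reduce the available count.
{code}
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

public class AvailableNodeCountSketch {
  // Only excluded nodes that are actually present in the cluster map may be
  // subtracted from the number of available nodes.
  static int countAvailable(Set<String> clusterNodes, Collection<String> excluded) {
    int excludedInCluster = 0;
    for (String node : excluded) {
      if (clusterNodes.contains(node)) {
        excludedInCluster++;
      }
    }
    return clusterNodes.size() - excludedInCluster;
  }

  public static void main(String[] args) {
    // dn2 was already removed from the cluster map when the client excluded it.
    Set<String> cluster = new HashSet<String>(Arrays.asList("dn1"));
    Collection<String> excluded = Arrays.asList("dn2");
    System.out.println(countAvailable(cluster, excluded)); // prints 1, not 0
  }
}
{code}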

 NetWorkTopology#countNumOfAvailableNodes() is returning wrong value if 
 excluded nodes passed are not part of the cluster tree
 -

 Key: HADOOP-10131
 URL: https://issues.apache.org/jira/browse/HADOOP-10131
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.5-alpha
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10131.patch, HDFS-5112.patch


 I got "File /hdfs_COPYING_ could only be replicated to 0 nodes instead of 
 minReplication (=1). There are 1 datanode(s) running and 1 node(s) are 
 excluded in this operation." in the following case:
 1. A 2-DN cluster.
 2. One of the datanodes had not been responding for the last 10 minutes and 
 was about to be detected as dead at the NN.
 3. Tried to write one file; for the block, the NN allocated both DNs.
 4. While creating the pipeline, the client took some time to detect the 
 failure of one node.
 5. Before the client detected the pipeline failure, the dead node was removed 
 from the cluster map on the NN side.
 6. The client then abandoned the previous block and asked for a new block with 
 the dead node in the excluded list, and got the above exception even though 
 one more live node was available.
 When I dug into this more, I found that 
 {{NetWorkTopology#countNumOfAvailableNodes()}} does not give the correct count 
 when the excludeNodes passed from the client are not part of the cluster map.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10126) LightWeightGSet log message is confusing : 2.0% max memory = 2.0 GB

2013-11-25 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13832170#comment-13832170
 ] 

Vinay commented on HADOOP-10126:


Thanks Suresh.

 LightWeightGSet log message is confusing : 2.0% max memory = 2.0 GB
 -

 Key: HADOOP-10126
 URL: https://issues.apache.org/jira/browse/HADOOP-10126
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Reporter: Vinay
Assignee: Vinay
Priority: Minor
 Fix For: 2.3.0

 Attachments: HADOOP-10126.patch


 The following log message from LightWeightGSet is confusing:
 {noformat}2013-11-21 18:00:21,198 INFO org.apache.hadoop.util.GSet: 2.0% max 
 memory = 2.0 GB{noformat}
 Here 2.0 GB is the max JVM memory, but the message reads as if 2% of max 
 memory is 2.0 GB. 
 It would be better written like this:
 2.0% of max memory 2.0 GB = 40.9 MB



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-10126) LightWeightGSet log message is confusing : 2.0% max memory = 2.0 GB

2013-11-24 Thread Vinay (JIRA)
Vinay created HADOOP-10126:
--

 Summary: LightWeightGSet log message is confusing : 2.0% max 
memory = 2.0 GB
 Key: HADOOP-10126
 URL: https://issues.apache.org/jira/browse/HADOOP-10126
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Reporter: Vinay
Assignee: Vinay
Priority: Minor


The following log message from LightWeightGSet is confusing:
{noformat}2013-11-21 18:00:21,198 INFO org.apache.hadoop.util.GSet: 2.0% max 
memory = 2.0 GB{noformat}
Here 2.0 GB is the max JVM memory, but the message reads as if 2% of max memory is 
2.0 GB. 

It would be better written like this:
2.0% of max memory 2.0 GB = 40.9 MB
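
A small sketch of how the clearer wording could be produced (illustrative only, not the committed change):
{code}
public class GSetLogMessageSketch {
  public static void main(String[] args) {
    double percentage = 2.0;
    long maxMemory = Runtime.getRuntime().maxMemory();
    double capacityBytes = maxMemory * percentage / 100.0;
    // e.g. "2.0% of max memory 2.0 GB = 40.9 MB"
    System.out.println(String.format("%.1f%% of max memory %.1f GB = %.1f MB",
        percentage,
        maxMemory / 1024.0 / 1024.0 / 1024.0,
        capacityBytes / 1024.0 / 1024.0));
  }
}
{code}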



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10126) LightWeightGSet log message is confusing : 2.0% max memory = 2.0 GB

2013-11-24 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10126:
---

Status: Patch Available  (was: Open)

 LightWeightGSet log message is confusing : 2.0% max memory = 2.0 GB
 -

 Key: HADOOP-10126
 URL: https://issues.apache.org/jira/browse/HADOOP-10126
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Reporter: Vinay
Assignee: Vinay
Priority: Minor
 Attachments: HADOOP-10126.patch


 The following log message from LightWeightGSet is confusing:
 {noformat}2013-11-21 18:00:21,198 INFO org.apache.hadoop.util.GSet: 2.0% max 
 memory = 2.0 GB{noformat}
 Here 2.0 GB is the max JVM memory, but the message reads as if 2% of max 
 memory is 2.0 GB. 
 It would be better written like this:
 2.0% of max memory 2.0 GB = 40.9 MB



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10126) LightWeightGSet log message is confusing : 2.0% max memory = 2.0 GB

2013-11-24 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10126:
---

Attachment: HADOOP-10126.patch

Attaching patch to change the Log message.

 LightWeightGSet log message is confusing : 2.0% max memory = 2.0 GB
 -

 Key: HADOOP-10126
 URL: https://issues.apache.org/jira/browse/HADOOP-10126
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Reporter: Vinay
Assignee: Vinay
Priority: Minor
 Attachments: HADOOP-10126.patch


 The following log message from LightWeightGSet is confusing:
 {noformat}2013-11-21 18:00:21,198 INFO org.apache.hadoop.util.GSet: 2.0% max 
 memory = 2.0 GB{noformat}
 Here 2.0 GB is the max JVM memory, but the message reads as if 2% of max 
 memory is 2.0 GB. 
 It would be better written like this:
 2.0% of max memory 2.0 GB = 40.9 MB



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-9921) daemon scripts should remove pid file on stop call after stop or process is found not running

2013-11-21 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13829056#comment-13829056
 ] 

Vinay commented on HADOOP-9921:
---

Hi,
Can someone please take a look at the patch?

 daemon scripts should remove pid file on stop call after stop or process is 
 found not running
 -

 Key: HADOOP-9921
 URL: https://issues.apache.org/jira/browse/HADOOP-9921
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-9921.patch


 daemon scripts should remove the pid file on stop call using daemon script.
 Should remove the pid file, even though process is not running.
 same pid file will be used by start command. At that time, if the same pid is 
 assigned to some other process, then start may fail.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-9622) bzip2 codec can drop records when reading data in splits

2013-11-21 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13829028#comment-13829028
 ] 

Vinay commented on HADOOP-9622:
---

That sounds better, Jason. 
+1 for the existing patch then.

 bzip2 codec can drop records when reading data in splits
 

 Key: HADOOP-9622
 URL: https://issues.apache.org/jira/browse/HADOOP-9622
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.4-alpha, 0.23.8
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-9622-2.patch, HADOOP-9622-testcase.patch, 
 HADOOP-9622.patch, blockEndingInCR.txt.bz2, blockEndingInCRThenLF.txt.bz2


 Bzip2Codec.BZip2CompressionInputStream can cause records to be dropped when 
 reading them in splits based on where record delimiters occur relative to 
 compression block boundaries.
 Thanks to [~knoguchi] for discovering this problem while working on PIG-3251.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10111) Allow DU to be initialized with an initial value

2013-11-21 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13829044#comment-13829044
 ] 

Vinay commented on HADOOP-10111:


+1 Patch looks good and the changes will surely improve datanode startup time. 
Thanks, Kihwal.

 Allow DU to be initialized with an initial value
 

 Key: HADOOP-10111
 URL: https://issues.apache.org/jira/browse/HADOOP-10111
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HADOOP-10111.patch, HADOOP-10111.patch


 When a DU object is created, the du command runs right away. If the target 
 directory contains a huge number of files and directories, its constructor 
 may not return for many seconds.  It will be nice if it can be told to delay 
 the initial scan and use a specified initial used value.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-9867) org.apache.hadoop.mapred.LineRecordReader does not handle multibyte record delimiters well

2013-11-20 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-9867:
--

Attachment: HADOOP-9867.patch

Attaching a patch with the test mentioned by Jason.

It reads one more record if the split ends between the delimiter bytes.

Please review.

 org.apache.hadoop.mapred.LineRecordReader does not handle multibyte record 
 delimiters well
 --

 Key: HADOOP-9867
 URL: https://issues.apache.org/jira/browse/HADOOP-9867
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 0.20.2, 0.23.9, 2.2.0
 Environment: CDH3U2 Redhat linux 5.7
Reporter: Kris Geusebroek
Priority: Critical
 Attachments: HADOOP-9867.patch


 Defining a record delimiter of multiple bytes in a new InputFileFormat 
 sometimes has the effect of skipping records from the input.
 This happens when an input split is cut off just after a 
 record separator. The starting point for the next split is non-zero and 
 skipFirstLine is true. A seek into the file is done to start - 1 and 
 the text until the first record delimiter is ignored (on the presumption 
 that this record was already handled by the previous map task). Since the 
 record delimiter is multibyte, the seek only brings the last byte of the delimiter 
 into scope and it is not recognized as a full delimiter. So the text is skipped 
 until the next delimiter (ignoring a full record!!)
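
As a toy illustration of this (made-up class name and data, not LineRecordReader itself), 
a naive skip-to-first-delimiter scan that starts at splitStart - 1 drops a whole record 
when the delimiter straddles the split boundary:
{code}
import java.nio.charset.StandardCharsets;

public class MultibyteDelimiterSkipDemo {
  public static void main(String[] args) {
    byte[] data = "rec1##rec2##rec3##".getBytes(StandardCharsets.UTF_8);
    byte[] delim = "##".getBytes(StandardCharsets.UTF_8);   // two-byte delimiter
    int splitStart = 6;            // the split boundary falls right after "rec1##"
    int pos = splitStart - 1;      // the reader seeks one byte back ...
    // ... and discards everything up to the next *full* delimiter match.
    while (pos + delim.length <= data.length && !matches(data, pos, delim)) {
      pos++;
    }
    int resume = pos + delim.length;
    // Only the last byte of "##" is in scope at splitStart - 1, so the scan runs on
    // to the delimiter after "rec2" and that whole record is skipped.
    System.out.println("resumes at offset " + resume + ": "
        + new String(data, resume, data.length - resume, StandardCharsets.UTF_8));
  }

  private static boolean matches(byte[] data, int off, byte[] delim) {
    for (int i = 0; i < delim.length; i++) {
      if (data[off + i] != delim[i]) return false;
    }
    return true;
  }
}
{code}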



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-9867) org.apache.hadoop.mapred.LineRecordReader does not handle multibyte record delimiters well

2013-11-20 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-9867:
--

Attachment: HADOOP-9867.patch

Updated the patch to fix a possible NPE.

 org.apache.hadoop.mapred.LineRecordReader does not handle multibyte record 
 delimiters well
 --

 Key: HADOOP-9867
 URL: https://issues.apache.org/jira/browse/HADOOP-9867
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 0.20.2, 0.23.9, 2.2.0
 Environment: CDH3U2 Redhat linux 5.7
Reporter: Kris Geusebroek
Priority: Critical
 Attachments: HADOOP-9867.patch, HADOOP-9867.patch


 Defining a record delimiter of multiple bytes in a new InputFileFormat 
 sometimes has the effect of skipping records from the input.
 This happens when an input split is cut off just after a 
 record separator. The starting point for the next split is non-zero and 
 skipFirstLine is true. A seek into the file is done to start - 1 and 
 the text until the first record delimiter is ignored (on the presumption 
 that this record was already handled by the previous map task). Since the 
 record delimiter is multibyte, the seek only brings the last byte of the delimiter 
 into scope and it is not recognized as a full delimiter. So the text is skipped 
 until the next delimiter (ignoring a full record!!)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-9867) org.apache.hadoop.mapred.LineRecordReader does not handle multibyte record delimiters well

2013-11-20 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-9867:
--

Assignee: Vinay
  Status: Patch Available  (was: Open)

 org.apache.hadoop.mapred.LineRecordReader does not handle multibyte record 
 delimiters well
 --

 Key: HADOOP-9867
 URL: https://issues.apache.org/jira/browse/HADOOP-9867
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.2.0, 0.23.9, 0.20.2
 Environment: CDH3U2 Redhat linux 5.7
Reporter: Kris Geusebroek
Assignee: Vinay
Priority: Critical
 Attachments: HADOOP-9867.patch, HADOOP-9867.patch


 Defining a record delimiter of multiple bytes in a new InputFileFormat 
 sometimes has the effect of skipping records from the input.
 This happens when an input split is cut off just after a 
 record separator. The starting point for the next split is non-zero and 
 skipFirstLine is true. A seek into the file is done to start - 1 and 
 the text until the first record delimiter is ignored (on the presumption 
 that this record was already handled by the previous map task). Since the 
 record delimiter is multibyte, the seek only brings the last byte of the delimiter 
 into scope and it is not recognized as a full delimiter. So the text is skipped 
 until the next delimiter (ignoring a full record!!)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-9622) bzip2 codec can drop records when reading data in splits

2013-11-20 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13828448#comment-13828448
 ] 

Vinay commented on HADOOP-9622:
---

Thanks Jason for the patch for this tricky issue.
Patch looks good to me.

One small nit.
There are already two test classes named TestLineRecordReader, in the mapred and 
mapreduce.lib.input packages of the hadoop-mapreduce-client-jobclient project. It 
would be better to move the included tests into those classes instead of creating 
new ones.

 bzip2 codec can drop records when reading data in splits
 

 Key: HADOOP-9622
 URL: https://issues.apache.org/jira/browse/HADOOP-9622
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.4-alpha, 0.23.8
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Attachments: HADOOP-9622-2.patch, HADOOP-9622-testcase.patch, 
 HADOOP-9622.patch, blockEndingInCR.txt.bz2, blockEndingInCRThenLF.txt.bz2


 Bzip2Codec.BZip2CompressionInputStream can cause records to be dropped when 
 reading them in splits based on where record delimiters occur relative to 
 compression block boundaries.
 Thanks to [~knoguchi] for discovering this problem while working on PIG-3251.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-9867) org.apache.hadoop.mapred.LineRecordReader does not handle multibyte record delimiters well

2013-11-20 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13828453#comment-13828453
 ] 

Vinay commented on HADOOP-9867:
---

Thanks Jason, I would prefer to wait for HADOOP-9622 to be committed. 
Meanwhile I will try to update SplitLineReader offline. 

 org.apache.hadoop.mapred.LineRecordReader does not handle multibyte record 
 delimiters well
 --

 Key: HADOOP-9867
 URL: https://issues.apache.org/jira/browse/HADOOP-9867
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 0.20.2, 0.23.9, 2.2.0
 Environment: CDH3U2 Redhat linux 5.7
Reporter: Kris Geusebroek
Assignee: Vinay
Priority: Critical
 Attachments: HADOOP-9867.patch, HADOOP-9867.patch


 Defining a record delimiter of multiple bytes in a new InputFileFormat 
 sometimes has the effect of skipping records from the input.
 This happens when an input split is cut off just after a 
 record separator. The starting point for the next split is non-zero and 
 skipFirstLine is true. A seek into the file is done to start - 1 and 
 the text until the first record delimiter is ignored (on the presumption 
 that this record was already handled by the previous map task). Since the 
 record delimiter is multibyte, the seek only brings the last byte of the delimiter 
 into scope and it is not recognized as a full delimiter. So the text is skipped 
 until the next delimiter (ignoring a full record!!)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-9870) Mixed configurations for JVM -Xmx in hadoop command

2013-11-19 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13827338#comment-13827338
 ] 

Vinay commented on HADOOP-9870:
---

AFAIK, a Java process handles its command-line VM args sequentially. If the 
same VM argument is set multiple times, it will choose the last one as its value.
Even though hadoop passes several -Xmx configurations, the last one in the list 
takes effect. 
Users need not be confused by the other JVM argument (-Xmx1000m); it only serves 
as a fallback when nothing is configured (in case HADOOP_CONF_DIR is different 
and does not have a hadoop-env.sh file).

So I am not seeing any issue here.
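
For example (a hypothetical standalone class, not part of Hadoop), the last-wins 
behaviour can be verified directly:
{code}
/** Hypothetical check, not part of Hadoop: prints the effective max heap. */
public class MaxHeapCheck {
  public static void main(String[] args) {
    // Running "java -Xmx1000m -Xmx512m MaxHeapCheck" reports roughly 512 MB,
    // i.e. only the last -Xmx on the command line takes effect.
    System.out.println(Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");
  }
}
{code}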

 Mixed configurations for JVM -Xmx in hadoop command
 ---

 Key: HADOOP-9870
 URL: https://issues.apache.org/jira/browse/HADOOP-9870
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
 Attachments: HADOOP-9870.patch, HADOOP-9870.patch, HADOOP-9870.patch


 When we use the hadoop command to launch a class, there are two places setting 
 the -Xmx configuration.
 *1*. The first place is located in file 
 {{hadoop-common-project/hadoop-common/src/main/bin/hadoop}}.
 {code}
 exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
 {code}
 Here $JAVA_HEAP_MAX is configured in hadoop-config.sh 
 ({{hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh}}). The 
 default value is -Xmx1000m.
 *2*. The second place is set with $HADOOP_OPTS in file 
 {{hadoop-common-project/hadoop-common/src/main/bin/hadoop}}.
 {code}
 HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
 {code}
 Here $HADOOP_CLIENT_OPTS is set in hadoop-env.sh 
 ({{hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh}})
 {code}
 export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
 {code}
 Currently the final default java command looks like:
 {code}java -Xmx1000m  -Xmx512m CLASS_NAME ARGUMENTS{code}
 And if users also specify -Xmx in $HADOOP_CLIENT_OPTS, there will be 
 three -Xmx configurations. 
 The hadoop setup tutorial only discusses hadoop-env.sh, and it looks like 
 users should not make any change in hadoop-config.sh.
 We should make hadoop smart enough to choose the right one before launching the java 
 command, instead of leaving the decision to the JVM.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-9870) Mixed configurations for JVM -Xmx in hadoop command

2013-11-19 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13827363#comment-13827363
 ] 

Vinay commented on HADOOP-9870:
---

bq. I haven't found any documents that said the jvm would pick the last one
Yes, you are right. I too didn't find any explicit document in hadoop mentioning 
that. But we tested it and found that only the later argument value is used. 
And we are using it in our clusters by configuring a higher value than the default 
of 1000m. 
User-specified opts are added at the end of the command-line list, just before the 
class name, precisely to make sure that their parameters take effect.

 Mixed configurations for JVM -Xmx in hadoop command
 ---

 Key: HADOOP-9870
 URL: https://issues.apache.org/jira/browse/HADOOP-9870
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
 Attachments: HADOOP-9870.patch, HADOOP-9870.patch, HADOOP-9870.patch


 When we use the hadoop command to launch a class, there are two places setting 
 the -Xmx configuration.
 *1*. The first place is located in file 
 {{hadoop-common-project/hadoop-common/src/main/bin/hadoop}}.
 {code}
 exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
 {code}
 Here $JAVA_HEAP_MAX is configured in hadoop-config.sh 
 ({{hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh}}). The 
 default value is -Xmx1000m.
 *2*. The second place is set with $HADOOP_OPTS in file 
 {{hadoop-common-project/hadoop-common/src/main/bin/hadoop}}.
 {code}
 HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
 {code}
 Here $HADOOP_CLIENT_OPTS is set in hadoop-env.sh 
 ({{hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh}})
 {code}
 export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
 {code}
 Currently the final default java command looks like:
 {code}java -Xmx1000m  -Xmx512m CLASS_NAME ARGUMENTS{code}
 And if users also specify -Xmx in $HADOOP_CLIENT_OPTS, there will be 
 three -Xmx configurations. 
 The hadoop setup tutorial only discusses hadoop-env.sh, and it looks like 
 users should not make any change in hadoop-config.sh.
 We should make hadoop smart enough to choose the right one before launching the java 
 command, instead of leaving the decision to the JVM.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-10115) Exclude duplicate jars in hadoop package under different component's lib

2013-11-18 Thread Vinay (JIRA)
Vinay created HADOOP-10115:
--

 Summary: Exclude duplicate jars in hadoop package under different 
component's lib
 Key: HADOOP-10115
 URL: https://issues.apache.org/jira/browse/HADOOP-10115
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0, 3.0.0
Reporter: Vinay
Assignee: Vinay



In the hadoop package distribution, more than 90% of the jars are 
duplicated in multiple places.
For example:
almost all jars in share/hadoop/hdfs/lib are already present in 
share/hadoop/common/lib.

The same is true for every other lib directory under share.

In any case, for all the daemon processes all of these directories are added to the classpath.

So to reduce the package distribution size and the classpath overhead, remove 
the duplicate jars from the distribution.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10115) Exclude duplicate jars in hadoop package under different component's lib

2013-11-18 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10115:
---

Status: Patch Available  (was: Open)

 Exclude duplicate jars in hadoop package under different component's lib
 

 Key: HADOOP-10115
 URL: https://issues.apache.org/jira/browse/HADOOP-10115
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0, 3.0.0
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10115.patch


 In the hadoop package distribution, more than 90% of the jars are 
 duplicated in multiple places.
 For example:
 almost all jars in share/hadoop/hdfs/lib are already present in 
 share/hadoop/common/lib.
 The same is true for every other lib directory under share.
 In any case, for all the daemon processes all of these directories are added to the classpath.
 So to reduce the package distribution size and the classpath overhead, remove 
 the duplicate jars from the distribution.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10115) Exclude duplicate jars in hadoop package under different component's lib

2013-11-18 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10115:
---

Attachment: HADOOP-10115.patch

Uploading a patch which checks for duplicate libs: a jar is copied only if it is 
not already present.
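
As an illustration only of the copy-if-absent idea (the real patch presumably works at 
the packaging level rather than in Java code; the class name and paths below are just 
examples):
{code}
import java.io.IOException;
import java.nio.file.*;

public class CopyIfAbsentDemo {
  public static void main(String[] args) throws IOException {
    Path jar = Paths.get(args[0]);                         // jar to place
    Path commonLib = Paths.get("share/hadoop/common/lib"); // already-populated lib
    Path componentLib = Paths.get(args[1]);                // e.g. share/hadoop/hdfs/lib
    // Copy into the component's lib dir only when common/lib doesn't already have it.
    if (!Files.exists(commonLib.resolve(jar.getFileName()))) {
      Files.createDirectories(componentLib);
      Files.copy(jar, componentLib.resolve(jar.getFileName()),
          StandardCopyOption.REPLACE_EXISTING);
    }
  }
}
{code}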

 Exclude duplicate jars in hadoop package under different component's lib
 

 Key: HADOOP-10115
 URL: https://issues.apache.org/jira/browse/HADOOP-10115
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.2.0
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10115.patch


 In the hadoop package distribution, more than 90% of the jars are 
 duplicated in multiple places.
 For example:
 almost all jars in share/hadoop/hdfs/lib are already present in 
 share/hadoop/common/lib.
 The same is true for every other lib directory under share.
 In any case, for all the daemon processes all of these directories are added to the classpath.
 So to reduce the package distribution size and the classpath overhead, remove 
 the duplicate jars from the distribution.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-9991) Fix up Hadoop Poms for enforced dependencies, roll up JARs to latest versions

2013-11-18 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13826242#comment-13826242
 ] 

Vinay commented on HADOOP-9991:
---

OK Steve, no problem.
Filed HADOOP-10115 for the duplicate-jars packaging issue. Thanks

 Fix up Hadoop Poms for enforced dependencies, roll up JARs to latest versions
 -

 Key: HADOOP-9991
 URL: https://issues.apache.org/jira/browse/HADOOP-9991
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.3.0, 2.1.1-beta
Reporter: Steve Loughran
 Attachments: hadoop-9991-v1.txt


 If you try using Hadoop downstream with a classpath shared with HBase and 
 Accumulo, you soon discover how messy the dependencies are.
 Hadoop's side of this problem is
 # not being up to date with some of the external releases of common JARs
 # not locking down/excluding inconsistent versions of artifacts provided down 
 the dependency graph



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-10115) Exclude duplicate jars in hadoop package under different component's lib

2013-11-18 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-10115:
---

Attachment: HADOOP-10115.patch

small correction

 Exclude duplicate jars in hadoop package under different component's lib
 

 Key: HADOOP-10115
 URL: https://issues.apache.org/jira/browse/HADOOP-10115
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.2.0
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10115.patch, HADOOP-10115.patch


 In the hadoop package distribution, more than 90% of the jars are 
 duplicated in multiple places.
 For example:
 almost all jars in share/hadoop/hdfs/lib are already present in 
 share/hadoop/common/lib.
 The same is true for every other lib directory under share.
 In any case, for all the daemon processes all of these directories are added to the classpath.
 So to reduce the package distribution size and the classpath overhead, remove 
 the duplicate jars from the distribution.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-9991) Fix up Hadoop Poms for enforced dependencies, roll up JARs to latest versions

2013-11-17 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13824863#comment-13824863
 ] 

Vinay commented on HADOOP-9991:
---

Thanks Steve for initiating this major task.
Adding to the above points, there are many duplicate jars in the distribution: 
hdfs/lib, mapreduce/lib, tools/lib and yarn/lib contain ~90% of the same jars 
present in common/lib.

Is there any specific reason to keep these multiple copies of the jars?

 Fix up Hadoop Poms for enforced dependencies, roll up JARs to latest versions
 -

 Key: HADOOP-9991
 URL: https://issues.apache.org/jira/browse/HADOOP-9991
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.3.0, 2.1.1-beta
Reporter: Steve Loughran
 Attachments: hadoop-9991-v1.txt


 If you try using Hadoop downstream with a classpath shared with HBase and 
 Accumulo, you soon discover how messy the dependencies are.
 Hadoop's side of this problem is
 # not being up to date with some of the external releases of common JARs
 # not locking down/excluding inconsistent versions of artifacts provided down 
 the dependency graph



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-9114) After defined the dfs.checksum.type as the NULL, write file and hflush will through java.lang.ArrayIndexOutOfBoundsException

2013-11-12 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13820110#comment-13820110
 ] 

Vinay commented on HADOOP-9114:
---

Thanks Sathish for posting the patch.
+1, patch looks good to me.

 After defined the dfs.checksum.type as the NULL, write file and hflush will 
 through java.lang.ArrayIndexOutOfBoundsException
 

 Key: HADOOP-9114
 URL: https://issues.apache.org/jira/browse/HADOOP-9114
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: liuyang
Priority: Minor
 Attachments: FSOutputSummer.java.patch, HADOOP-9114-001.patch


 While testing the dfs.checksum.type parameter: the value 
 can be defined as NULL, CRC32C or CRC32. It's OK when the value is CRC32C or 
 CRC32, but the client will throw java.lang.ArrayIndexOutOfBoundsException 
 when the value is configured as NULL.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-9505) Specifying checksum type to NULL can cause write failures with AIOBE

2013-11-12 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-9505:
--

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

 Specifying checksum type to NULL can cause write failures with AIOBE
 

 Key: HADOOP-9505
 URL: https://issues.apache.org/jira/browse/HADOOP-9505
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.1.0-beta
Reporter: Uma Maheswara Rao G
Assignee: Vinay
Priority: Minor
 Attachments: HADOOP-9505.patch


 I have created a file with checksum disable option and I am seeing 
 ArrayIndexOutOfBoundsException.
 {code}
 out = fs.create(fileName, FsPermission.getDefault(), flags, fs.getConf()
 .getInt("io.file.buffer.size", 4096), replFactor, fs
 .getDefaultBlockSize(fileName), null, ChecksumOpt.createDisabled());
 {code}
 See the trace here:
 {noformat}
 java.lang.ArrayIndexOutOfBoundsException: 0
   at org.apache.hadoop.fs.FSOutputSummer.int2byte(FSOutputSummer.java:178)
   at 
 org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:162)
   at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:106)
   at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:92)
   at 
 org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:54)
   at java.io.DataOutputStream.write(DataOutputStream.java:90)
   at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:261)
   at 
 org.apache.hadoop.hdfs.TestReplication.testBadBlockReportOnTransfer(TestReplication.java:174)
 {noformat}
 FSOutputSummer#int2byte does not check the bytes length, so do you think 
 we have to check the length and only then call it in the CRC NULL case, as 
 there will not be any checksum bytes?
 {code}
 static byte[] int2byte(int integer, byte[] bytes) {
 bytes[0] = (byte)((integer >> 24) & 0xFF);
 bytes[1] = (byte)((integer >> 16) & 0xFF);
 bytes[2] = (byte)((integer >>  8) & 0xFF);
 bytes[3] = (byte)((integer >>  0) & 0xFF);
 return bytes;
   }
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-10081) Client.setupIOStreams can leak socket resources on exception or error

2013-11-12 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13820118#comment-13820118
 ] 

Vinay commented on HADOOP-10081:


+1 Patch looks good to me. 


 Client.setupIOStreams can leak socket resources on exception or error
 -

 Key: HADOOP-10081
 URL: https://issues.apache.org/jira/browse/HADOOP-10081
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.23.9, 2.2.0
Reporter: Jason Lowe
Assignee: Tsuyoshi OZAWA
Priority: Critical
 Attachments: HADOOP-10081.1.patch


 The setupIOStreams method in org.apache.hadoop.ipc.Client can leak socket 
 resources if an exception is thrown before the inStream and outStream local 
 variables are assigned to this.in and this.out, respectively.  



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-9905) remove dependency of zookeeper for hadoop-client

2013-11-12 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13820120#comment-13820120
 ] 

Vinay commented on HADOOP-9905:
---

Is any more work required on this jira?


 remove dependency of zookeeper for hadoop-client
 

 Key: HADOOP-9905
 URL: https://issues.apache.org/jira/browse/HADOOP-9905
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta, 2.0.6-alpha
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-9905.patch


 zookeeper dependency was added for ZKFC, which will not be used by client.
 Better remove the dependency of zookeeper jar for hadoop-client



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-9922) hadoop windows native build will fail in 32 bit machine

2013-11-12 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13820125#comment-13820125
 ] 

Vinay commented on HADOOP-9922:
---

Is any more work required on this issue?

I guess nobody else is building hadoop on 32-bit windows.. ;)

 hadoop windows native build will fail in 32 bit machine
 ---

 Key: HADOOP-9922
 URL: https://issues.apache.org/jira/browse/HADOOP-9922
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-9922.patch


 Building Hadoop on a 32-bit windows machine fails as the native project does not 
 have a Win32 configuration



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-9114) After defined the dfs.checksum.type as the NULL, write file and hflush will through java.lang.ArrayIndexOutOfBoundsException

2013-11-12 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-9114:
--

Attachment: HADOOP-9114-002.patch

Updated the patch with format fixes. Please review

 After defined the dfs.checksum.type as the NULL, write file and hflush will 
 through java.lang.ArrayIndexOutOfBoundsException
 

 Key: HADOOP-9114
 URL: https://issues.apache.org/jira/browse/HADOOP-9114
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: liuyang
Priority: Minor
 Attachments: FSOutputSummer.java.patch, HADOOP-9114-001.patch, 
 HADOOP-9114-002.patch


 While testing the dfs.checksum.type parameter: the value 
 can be defined as NULL, CRC32C or CRC32. It's OK when the value is CRC32C or 
 CRC32, but the client will throw java.lang.ArrayIndexOutOfBoundsException 
 when the value is configured as NULL.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HADOOP-9114) After defined the dfs.checksum.type as the NULL, write file and hflush will through java.lang.ArrayIndexOutOfBoundsException

2013-11-12 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13820811#comment-13820811
 ] 

Vinay commented on HADOOP-9114:
---

I am not able to assign this issue to Sathish. Maybe he needs to be added as a 
contributor in jira?

 After defined the dfs.checksum.type as the NULL, write file and hflush will 
 through java.lang.ArrayIndexOutOfBoundsException
 

 Key: HADOOP-9114
 URL: https://issues.apache.org/jira/browse/HADOOP-9114
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.1-alpha
Reporter: liuyang
Priority: Minor
 Attachments: FSOutputSummer.java.patch, HADOOP-9114-001.patch, 
 HADOOP-9114-002.patch


 While testing the dfs.checksum.type parameter: the value 
 can be defined as NULL, CRC32C or CRC32. It's OK when the value is CRC32C or 
 CRC32, but the client will throw java.lang.ArrayIndexOutOfBoundsException 
 when the value is configured as NULL.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HADOOP-10071) under construction files deletion after snapshot+checkpoint+nn restart leads nn safemode

2013-10-25 Thread Vinay (JIRA)
Vinay created HADOOP-10071:
--

 Summary: under construction files deletion after 
snapshot+checkpoint+nn restart leads nn safemode
 Key: HADOOP-10071
 URL: https://issues.apache.org/jira/browse/HADOOP-10071
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinay


1. allow snapshots under dir /foo
2. create a file /foo/test/bar and start writing to it
3. create a snapshot s1 under /foo after block is allocated and some data has 
been written to it
4. delete the directory /foo/test
5. wait till checkpoint, or do saveNamespace
6. restart the NN.

The NN enters safemode.

Analysis:
Snapshot nodes loaded from the fsimage are always complete and all blocks will be 
in the COMPLETE state. 
So when the DataNode reports RBW blocks, those will not be updated in the blocksmap.
Some of the FINALIZED blocks will be marked as corrupt due to a length mismatch.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HADOOP-8476) Remove duplicate VM arguments for hadoop deamon

2013-08-31 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-8476:
--

Status: Open  (was: Patch Available)

 Remove duplicate VM arguments for hadoop deamon
 ---

 Key: HADOOP-8476
 URL: https://issues.apache.org/jira/browse/HADOOP-8476
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Vinay
Assignee: Vinay
Priority: Minor
 Attachments: HADOOP-8476.patch, HADOOP-8476.patch


 Remove the duplicate VM arguments passed to the hadoop daemon.
 The following VM arguments are currently duplicated:
 {noformat}-Dproc_namenode
 -Xmx1000m
 -Djava.net.preferIPv4Stack=true
 -Xmx128m
 -Xmx128m
 -Dhadoop.log.dir=/home/nn2/logs
 -Dhadoop.log.file=hadoop-root-namenode-HOST-xx-xx-xx-105.log
 -Dhadoop.home.dir=/home/nn2/
 -Dhadoop.id.str=root
 -Dhadoop.root.logger=INFO,RFA
 -Dhadoop.policy.file=hadoop-policy.xml
 -Djava.net.preferIPv4Stack=true
 -Dhadoop.security.logger=INFO,RFAS
 -Dhdfs.audit.logger=INFO,NullAppender
 -Dhadoop.security.logger=INFO,RFAS
 -Dhdfs.audit.logger=INFO,NullAppender
 -Dhadoop.security.logger=INFO,RFAS
 -Dhdfs.audit.logger=INFO,NullAppender
 -Dhadoop.security.logger=INFO,RFAS{noformat}
  
 In the above VM arguments, -Xmx1000m will be overridden by -Xmx128m.
 BTW the other duplicate arguments won't harm.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9921) daemon scripts should remove pid file on stop call after stop or process is found not running

2013-08-30 Thread Vinay (JIRA)
Vinay created HADOOP-9921:
-

 Summary: daemon scripts should remove pid file on stop call after 
stop or process is found not running
 Key: HADOOP-9921
 URL: https://issues.apache.org/jira/browse/HADOOP-9921
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinay
Assignee: Vinay


Daemon scripts should remove the pid file when stop is called through the daemon script.

The pid file should be removed even if the process is found not running.

The same pid file will be used by the start command. At that time, if the same pid is 
assigned to some other process, then start may fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9921) daemon scripts should remove pid file on stop call after stop or process is found not running

2013-08-30 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-9921:
--

Attachment: HADOOP-9921.patch

Attached a patch, please review.

 daemon scripts should remove pid file on stop call after stop or process is 
 found not running
 -

 Key: HADOOP-9921
 URL: https://issues.apache.org/jira/browse/HADOOP-9921
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-9921.patch


 Daemon scripts should remove the pid file when stop is called through the daemon script.
 The pid file should be removed even if the process is found not running.
 The same pid file will be used by the start command. At that time, if the same pid is 
 assigned to some other process, then start may fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9922) hadoop windows native build will fail in 32 bit machine

2013-08-30 Thread Vinay (JIRA)
Vinay created HADOOP-9922:
-

 Summary: hadoop windows native build will fail in 32 bit machine
 Key: HADOOP-9922
 URL: https://issues.apache.org/jira/browse/HADOOP-9922
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Vinay


Building Hadoop on a 32-bit windows machine fails as the native project does not have 
a Win32 configuration

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9922) hadoop windows native build will fail in 32 bit machine

2013-08-30 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-9922:
--

Attachment: HADOOP-9922.patch

This patch has solved the issue for me. Please review.

 hadoop windows native build will fail in 32 bit machine
 ---

 Key: HADOOP-9922
 URL: https://issues.apache.org/jira/browse/HADOOP-9922
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Vinay
 Attachments: HADOOP-9922.patch


 Building Hadoop on a 32-bit windows machine fails as the native project does not 
 have a Win32 configuration

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9922) hadoop windows native build will fail in 32 bit machine

2013-08-30 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-9922:
--

Assignee: Vinay
  Status: Patch Available  (was: Open)

 hadoop windows native build will fail in 32 bit machine
 ---

 Key: HADOOP-9922
 URL: https://issues.apache.org/jira/browse/HADOOP-9922
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-9922.patch


 Building Hadoop on a 32-bit windows machine fails as the native project does not 
 have a Win32 configuration

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9921) daemon scripts should remove pid file on stop call after stop or process is found not running

2013-08-30 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-9921:
--

Affects Version/s: 3.0.0
   2.1.0-beta
   Status: Patch Available  (was: Open)

 daemon scripts should remove pid file on stop call after stop or process is 
 found not running
 -

 Key: HADOOP-9921
 URL: https://issues.apache.org/jira/browse/HADOOP-9921
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.1.0-beta, 3.0.0
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-9921.patch


 Daemon scripts should remove the pid file when stop is called through the daemon script.
 The pid file should be removed even if the process is found not running.
 The same pid file will be used by the start command. At that time, if the same pid is 
 assigned to some other process, then start may fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9905) remove dependency of zookeeper for hadoop-client

2013-08-26 Thread Vinay (JIRA)
Vinay created HADOOP-9905:
-

 Summary: remove dependency of zookeeper for hadoop-client
 Key: HADOOP-9905
 URL: https://issues.apache.org/jira/browse/HADOOP-9905
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Vinay


The zookeeper dependency was added for ZKFC, which will not be used by clients.
Better to remove the zookeeper jar dependency from hadoop-client.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9905) remove dependency of zookeeper for hadoop-client

2013-08-26 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-9905:
--

Attachment: HADOOP-9905.patch

Attaching a patch for the exclusion.

 remove dependency of zookeeper for hadoop-client
 

 Key: HADOOP-9905
 URL: https://issues.apache.org/jira/browse/HADOOP-9905
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-9905.patch


 zookeeper dependency was added for ZKFC, which will not be used by client.
 Better remove the dependency of zookeeper jar for hadoop-client

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9905) remove dependency of zookeeper for hadoop-client

2013-08-26 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-9905:
--

Status: Patch Available  (was: Open)

 remove dependency of zookeeper for hadoop-client
 

 Key: HADOOP-9905
 URL: https://issues.apache.org/jira/browse/HADOOP-9905
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-9905.patch


 zookeeper dependency was added for ZKFC, which will not be used by client.
 Better remove the dependency of zookeeper jar for hadoop-client

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9905) remove dependency of zookeeper for hadoop-client

2013-08-26 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-9905:
--

Affects Version/s: 2.0.6-alpha

 remove dependency of zookeeper for hadoop-client
 

 Key: HADOOP-9905
 URL: https://issues.apache.org/jira/browse/HADOOP-9905
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta, 2.0.6-alpha
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-9905.patch


 zookeeper dependency was added for ZKFC, which will not be used by client.
 Better remove the dependency of zookeeper jar for hadoop-client

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8160) HardLink.getLinkCount() is getting stuck in eclipse ( Cygwin) for long file names, due to MS-Dos style Path.

2013-08-22 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-8160:
--

Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

This is no longer a problem, as Windows support is implemented and the hardlink count 
is obtained using winutils.exe.

Closing as not a problem.

 HardLink.getLinkCount() is getting stuck in eclipse ( Cygwin) for long file 
 names, due to MS-Dos style Path.
 

 Key: HADOOP-8160
 URL: https://issues.apache.org/jira/browse/HADOOP-8160
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.23.1, 0.24.0
 Environment: Cygwin
Reporter: Vinay
Assignee: Vinay
Priority: Minor
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: HADOOP-8160.patch

   Original Estimate: 2m
  Remaining Estimate: 2m

 HardLink.getLinkCount() is getting stuck in cygwin for long file names, due 
 to MS-DOS style path.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9505) Specifying checksum type to NULL can cause write failures with AIOBE

2013-08-12 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-9505:
--

Attachment: HADOOP-9505.patch

Attached a patch handling the case where the CRC type is NULL.

 Specifying checksum type to NULL can cause write failures with AIOBE
 

 Key: HADOOP-9505
 URL: https://issues.apache.org/jira/browse/HADOOP-9505
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.1.0-beta
Reporter: Uma Maheswara Rao G
Priority: Minor
 Attachments: HADOOP-9505.patch


 I have created a file with checksum disable option and I am seeing 
 ArrayIndexOutOfBoundsException.
 {code}
 out = fs.create(fileName, FsPermission.getDefault(), flags, fs.getConf()
 .getInt("io.file.buffer.size", 4096), replFactor, fs
 .getDefaultBlockSize(fileName), null, ChecksumOpt.createDisabled());
 {code}
 See the trace here:
 {noformat}
 java.lang.ArrayIndexOutOfBoundsException: 0
   at org.apache.hadoop.fs.FSOutputSummer.int2byte(FSOutputSummer.java:178)
   at 
 org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:162)
   at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:106)
   at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:92)
   at 
 org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:54)
   at java.io.DataOutputStream.write(DataOutputStream.java:90)
   at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:261)
   at 
 org.apache.hadoop.hdfs.TestReplication.testBadBlockReportOnTransfer(TestReplication.java:174)
 {noformat}
 FSOutputSummer#int2byte does not check the bytes length, so do you think 
 we have to check the length and only then call it in the CRC NULL case, as 
 there will not be any checksum bytes?
 {code}
 static byte[] int2byte(int integer, byte[] bytes) {
 bytes[0] = (byte)((integer >> 24) & 0xFF);
 bytes[1] = (byte)((integer >> 16) & 0xFF);
 bytes[2] = (byte)((integer >>  8) & 0xFF);
 bytes[3] = (byte)((integer >>  0) & 0xFF);
 return bytes;
   }
 {code}
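
As a rough sketch of the guard suggested above (illustration only, not necessarily what 
the attached patch does), int2byte could simply skip writing when there are no checksum 
bytes:
{code}
// Sketch only: guard against an empty checksum buffer (the NULL-checksum case).
static byte[] int2byte(int integer, byte[] bytes) {
  if (bytes.length >= 4) {   // a NULL checksum yields a zero-length array
    bytes[0] = (byte)((integer >> 24) & 0xFF);
    bytes[1] = (byte)((integer >> 16) & 0xFF);
    bytes[2] = (byte)((integer >>  8) & 0xFF);
    bytes[3] = (byte)((integer >>  0) & 0xFF);
  }
  return bytes;
}
{code}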

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9505) Specifying checksum type to NULL can cause write failures with AIOBE

2013-08-12 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HADOOP-9505:
--

Assignee: Vinay
  Status: Patch Available  (was: Open)

 Specifying checksum type to NULL can cause write failures with AIOBE
 

 Key: HADOOP-9505
 URL: https://issues.apache.org/jira/browse/HADOOP-9505
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.1.0-beta
Reporter: Uma Maheswara Rao G
Assignee: Vinay
Priority: Minor
 Attachments: HADOOP-9505.patch


 I have created a file with checksum disable option and I am seeing 
 ArrayIndexOutOfBoundsException.
 {code}
 out = fs.create(fileName, FsPermission.getDefault(), flags, fs.getConf()
 .getInt("io.file.buffer.size", 4096), replFactor, fs
 .getDefaultBlockSize(fileName), null, ChecksumOpt.createDisabled());
 {code}
 See the trace here:
 {noformat}
 java.lang.ArrayIndexOutOfBoundsException: 0
   at org.apache.hadoop.fs.FSOutputSummer.int2byte(FSOutputSummer.java:178)
   at 
 org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:162)
   at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:106)
   at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:92)
   at 
 org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:54)
   at java.io.DataOutputStream.write(DataOutputStream.java:90)
   at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:261)
   at 
 org.apache.hadoop.hdfs.TestReplication.testBadBlockReportOnTransfer(TestReplication.java:174)
 {noformat}
 FSOutputSummer#int2byte does not check the bytes length, so do you think 
 we have to check the length and only then call it in the CRC NULL case, as 
 there will not be any checksum bytes?
 {code}
 static byte[] int2byte(int integer, byte[] bytes) {
 bytes[0] = (byte)((integer >> 24) & 0xFF);
 bytes[1] = (byte)((integer >> 16) & 0xFF);
 bytes[2] = (byte)((integer >>  8) & 0xFF);
 bytes[3] = (byte)((integer >>  0) & 0xFF);
 return bytes;
   }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9755) HADOOP-9164 breaks the windows native build

2013-07-23 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13717961#comment-13717961
 ] 

Vinay commented on HADOOP-9755:
---

Thanks for resolving the issues.

 HADOOP-9164 breaks the windows native build
 ---

 Key: HADOOP-9755
 URL: https://issues.apache.org/jira/browse/HADOOP-9755
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Binglin Chang
Priority: Blocker
 Attachments: HADOOP-9755.patch, HADOOP-9755.v2.patch


 After HADOOP-9164 the hadoop windows native build is broken.
 {noformat}  NativeCodeLoader.c
 src\org\apache\hadoop\util\NativeCodeLoader.c(41): error C2065: 'Dl_info' : 
 undeclared identifier 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(41): error C2146: syntax error 
 : missing ';' before identifier 'dl_info' 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(41): error C2065: 'dl_info' : 
 undeclared identifier 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(42): error C2143: syntax error 
 : missing ';' before 'type' 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(45): error C2065: 'ret' : 
 undeclared identifier 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(45): error C2065: 'dl_info' : 
 undeclared identifier 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(45): error C2224: left of 
 '.dli_fname' must have struct/union type 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(45): fatal error C1903: unable 
 to recover from previous error(s); stopping compilation 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
   NativeCrc32.c
 Done Building Project 
 D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj 
 (default targets) -- FAILED.
 Done Building Project 
 D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.sln 
 (default targets) -- FAILED.{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9755) HADOOP-9164 breaks the windows native build

2013-07-23 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13717963#comment-13717963
 ] 

Vinay commented on HADOOP-9755:
---

I have a small doubt: HADOOP-9759 was created after this jira (at least going by 
the jira id), so how can this be a duplicate of HADOOP-9759?

 HADOOP-9164 breaks the windows native build
 ---

 Key: HADOOP-9755
 URL: https://issues.apache.org/jira/browse/HADOOP-9755
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Binglin Chang
Priority: Blocker
 Attachments: HADOOP-9755.patch, HADOOP-9755.v2.patch


 After HADOOP-9164 the hadoop windows native build is broken.
 {noformat}  NativeCodeLoader.c
 src\org\apache\hadoop\util\NativeCodeLoader.c(41): error C2065: 'Dl_info' : 
 undeclared identifier 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(41): error C2146: syntax error 
 : missing ';' before identifier 'dl_info' 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(41): error C2065: 'dl_info' : 
 undeclared identifier 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(42): error C2143: syntax error 
 : missing ';' before 'type' 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(45): error C2065: 'ret' : 
 undeclared identifier 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(45): error C2065: 'dl_info' : 
 undeclared identifier 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(45): error C2224: left of 
 '.dli_fname' must have struct/union type 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(45): fatal error C1903: unable 
 to recover from previous error(s); stopping compilation 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
   NativeCrc32.c
 Done Building Project 
 D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj 
 (default targets) -- FAILED.
 Done Building Project 
 D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.sln 
 (default targets) -- FAILED.{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9755) HADOOP-9164 breaks the windows native build

2013-07-22 Thread Vinay (JIRA)
Vinay created HADOOP-9755:
-

 Summary: HADOOP-9164 breaks the windows native build
 Key: HADOOP-9755
 URL: https://issues.apache.org/jira/browse/HADOOP-9755
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Priority: Blocker


After HADOOP-9164 the hadoop windows native build is broken.

{noformat}  NativeCodeLoader.c
src\org\apache\hadoop\util\NativeCodeLoader.c(41): error C2065: 'Dl_info' : 
undeclared identifier 
[D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
src\org\apache\hadoop\util\NativeCodeLoader.c(41): error C2146: syntax error : 
missing ';' before identifier 'dl_info' 
[D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
src\org\apache\hadoop\util\NativeCodeLoader.c(41): error C2065: 'dl_info' : 
undeclared identifier 
[D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
src\org\apache\hadoop\util\NativeCodeLoader.c(42): error C2143: syntax error : 
missing ';' before 'type' 
[D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
src\org\apache\hadoop\util\NativeCodeLoader.c(45): error C2065: 'ret' : 
undeclared identifier 
[D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
src\org\apache\hadoop\util\NativeCodeLoader.c(45): error C2065: 'dl_info' : 
undeclared identifier 
[D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
src\org\apache\hadoop\util\NativeCodeLoader.c(45): error C2224: left of 
'.dli_fname' must have struct/union type 
[D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
src\org\apache\hadoop\util\NativeCodeLoader.c(45): fatal error C1903: unable to 
recover from previous error(s); stopping compilation 
[D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
  NativeCrc32.c
Done Building Project 
D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj 
(default targets) -- FAILED.
Done Building Project 
D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.sln 
(default targets) -- FAILED.{noformat}


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9755) HADOOP-9164 breaks the windows native build

2013-07-22 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13715187#comment-13715187
 ] 

Vinay commented on HADOOP-9755:
---

Thanks Chang, it worked for me.

 HADOOP-9164 breaks the windows native build
 ---

 Key: HADOOP-9755
 URL: https://issues.apache.org/jira/browse/HADOOP-9755
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Binglin Chang
Priority: Blocker
 Attachments: HADOOP-9755.patch


 After HADOOP-9164 the hadoop windows native build is broken.
 {noformat}  NativeCodeLoader.c
 src\org\apache\hadoop\util\NativeCodeLoader.c(41): error C2065: 'Dl_info' : 
 undeclared identifier 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(41): error C2146: syntax error 
 : missing ';' before identifier 'dl_info' 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(41): error C2065: 'dl_info' : 
 undeclared identifier 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(42): error C2143: syntax error 
 : missing ';' before 'type' 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(45): error C2065: 'ret' : 
 undeclared identifier 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(45): error C2065: 'dl_info' : 
 undeclared identifier 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(45): error C2224: left of 
 '.dli_fname' must have struct/union type 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
 src\org\apache\hadoop\util\NativeCodeLoader.c(45): fatal error C1903: unable 
 to recover from previous error(s); stopping compilation 
 [D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]
   NativeCrc32.c
 Done Building Project 
 D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj 
 (default targets) -- FAILED.
 Done Building Project 
 D:\hdp2\hadoop-common-project\hadoop-common\src\main\native\native.sln 
 (default targets) -- FAILED.{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

