[jira] [Commented] (HADOOP-14988) WASB: Expose WASB status metrics as counters in Hadoop

2017-10-26 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16221693#comment-16221693
 ] 

Bikas Saha commented on HADOOP-14988:
-

/cc [~steve_l]

> WASB: Expose WASB status metrics as counters in Hadoop
> --
>
> Key: HADOOP-14988
> URL: https://issues.apache.org/jira/browse/HADOOP-14988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Rajesh Balamohan
>Priority: Minor
>
> It would be good to expose WASB status metrics (e.g., 503) as Hadoop 
> counters. Here is an example from a Spark job, where it ends up spending a 
> large amount of time in retries. Adding Hadoop counters would help in 
> analyzing and tuning long-running tasks.
> {noformat}
> 2017-10-23 23:07:20,876 DEBUG [Executor task launch worker for task 2463] 
> azure.SelfThrottlingIntercept:  SelfThrottlingIntercept:: SendingRequest:   
> threadId=99, requestType=read , isFirstRequest=false, sleepDuration=0
> 2017-10-23 23:07:20,877 DEBUG [Executor task launch worker for task 2463] 
> azure.SelfThrottlingIntercept: SelfThrottlingIntercept:: ResponseReceived: 
> threadId=99, Status=503, Elapsed(ms)=1, ETAG=null, contentLength=198, 
> requestMethod=GET
> 2017-10-23 23:07:21,877 DEBUG [Executor task launch worker for task 2463] 
> azure.SelfThrottlingIntercept:  SelfThrottlingIntercept:: SendingRequest:   
> threadId=99, requestType=read , isFirstRequest=false, sleepDuration=0
> 2017-10-23 23:07:21,879 DEBUG [Executor task launch worker for task 2463] 
> azure.SelfThrottlingIntercept: SelfThrottlingIntercept:: ResponseReceived: 
> threadId=99, Status=503, Elapsed(ms)=2, ETAG=null, contentLength=198, 
> requestMethod=GET
> 2017-10-23 23:07:24,070 DEBUG [Executor task launch worker for task 2463] 
> azure.SelfThrottlingIntercept:  SelfThrottlingIntercept:: SendingRequest:   
> threadId=99, requestType=read , isFirstRequest=false, sleepDuration=0
> 2017-10-23 23:07:24,073 DEBUG [Executor task launch worker for task 2463] 
> azure.SelfThrottlingIntercept: SelfThrottlingIntercept:: ResponseReceived: 
> threadId=99, Status=503, Elapsed(ms)=3, ETAG=null, contentLength=198, 
> requestMethod=GET
> 2017-10-23 23:07:27,917 DEBUG [Executor task launch worker for task 2463] 
> azure.SelfThrottlingIntercept:  SelfThrottlingIntercept:: SendingRequest:   
> threadId=99, requestType=read , isFirstRequest=false, sleepDuration=0
> 2017-10-23 23:07:27,920 DEBUG [Executor task launch worker for task 2463] 
> azure.SelfThrottlingIntercept: SelfThrottlingIntercept:: ResponseReceived: 
> threadId=99, Status=503, Elapsed(ms)=2, ETAG=null, contentLength=198, 
> requestMethod=GET
> 2017-10-23 23:07:36,879 DEBUG [Executor task launch worker for task 2463] 
> azure.SelfThrottlingIntercept:  SelfThrottlingIntercept:: SendingRequest:   
> threadId=99, requestType=read , isFirstRequest=false, sleepDuration=0
> 2017-10-23 23:07:36,881 DEBUG [Executor task launch worker for task 2463] 
> azure.SelfThrottlingIntercept: SelfThrottlingIntercept:: ResponseReceived: 
> threadId=99, Status=503, Elapsed(ms)=1, ETAG=null, contentLength=198, 
> requestMethod=GET
> 2017-10-23 23:07:54,786 DEBUG [Executor task launch worker for task 2463] 
> azure.SelfThrottlingIntercept:  SelfThrottlingIntercept:: SendingRequest:   
> threadId=99, requestType=read , isFirstRequest=false, sleepDuration=0
> 2017-10-23 23:07:54,789 DEBUG [Executor task launch worker for task 2463] 
> azure.SelfThrottlingIntercept: SelfThrottlingIntercept:: ResponseReceived: 
> threadId=99, Status=503, Elapsed(ms)=3, ETAG=null, contentLength=198, 
> requestMethod=GET
> 2017-10-23 23:08:24,790 DEBUG [Executor task launch worker for task 2463] 
> azure.SelfThrottlingIntercept:  SelfThrottlingIntercept:: SendingRequest:   
> threadId=99, requestType=read , isFirstRequest=false, sleepDuration=0
> 2017-10-23 23:08:24,794 DEBUG [Executor task launch worker for task 2463] 
> azure.SelfThrottlingIntercept: SelfThrottlingIntercept:: ResponseReceived: 
> threadId=99, Status=503, Elapsed(ms)=4, ETAG=null, contentLength=198, 
> requestMethod=GET
> 2017-10-23 23:08:54,794 DEBUG [Executor task launch worker for task 2463] 
> azure.SelfThrottlingIntercept:  SelfThrottlingIntercept:: SendingRequest:   
> threadId=99, requestType=read , isFirstRequest=false, sleepDuration=0
> {noformat}
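> 
> A minimal sketch of how such counters might look with the Hadoop metrics2 
> library (class, metric, and method names here are hypothetical, not an 
> actual implementation; the instance would still need to be registered with 
> the metrics system):
> {code}
> import org.apache.hadoop.metrics2.annotation.Metric;
> import org.apache.hadoop.metrics2.annotation.Metrics;
> import org.apache.hadoop.metrics2.lib.MutableCounterLong;
> 
> @Metrics(about = "WASB HTTP status metrics", context = "azureFileSystem")
> public class WasbStatusMetrics {
> 
>   @Metric("Number of HTTP 503 (server busy / throttling) responses")
>   MutableCounterLong serverBusyCount;
> 
>   // Would be invoked from a response callback such as the
>   // SelfThrottlingIntercept ResponseReceived event shown in the log above.
>   public void statusReceived(int status) {
>     if (status == 503) {
>       serverBusyCount.incr();
>     }
>   }
> }
> {code}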






[jira] [Commented] (HADOOP-13184) Add "Apache" to Hadoop project logo

2016-06-07 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15318946#comment-15318946
 ] 

Bikas Saha commented on HADOOP-13184:
-

+1 on 5

> Add "Apache" to Hadoop project logo
> ---
>
> Key: HADOOP-13184
> URL: https://issues.apache.org/jira/browse/HADOOP-13184
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Chris Douglas
>Assignee: Abhishek
>
> Many ASF projects include "Apache" in their logo. We should add it to Hadoop.






[jira] [Commented] (HADOOP-11226) Add a configuration to set ipc.Client's traffic class with IPTOS_LOWDELAY|IPTOS_RELIABILITY

2015-09-24 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14907009#comment-14907009
 ] 

Bikas Saha commented on HADOOP-11226:
-

Comments would help this line.
{code}+this.socket.setPerformancePreferences(1, 2, 0);{code}
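For illustration, a commented version might read as follows (a sketch using 
the standard java.net.Socket API; the 0x14 traffic class value comes from 
this jira's description):
{code}
// Hint relative priorities to the JDK: latency (2) over connection time (1)
// over bandwidth (0). Implementations are free to ignore this hint.
this.socket.setPerformancePreferences(1, 2, 0);
// Request IPTOS_LOWDELAY (0x10) | IPTOS_RELIABILITY (0x04) = 0x14 QOS bits
// on outgoing packets, per this jira's description.
this.socket.setTrafficClass(0x10 | 0x04);
{code}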

> Add a configuration to set ipc.Client's traffic class with 
> IPTOS_LOWDELAY|IPTOS_RELIABILITY
> ---
>
> Key: HADOOP-11226
> URL: https://issues.apache.org/jira/browse/HADOOP-11226
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Gopal V
>Assignee: Gopal V
>  Labels: Infiniband
> Fix For: 2.8.0
>
> Attachments: HADOOP-11226.1.patch, HADOOP-11226.2.patch, 
> HADOOP-11226.3.patch, HADOOP-11226.4.patch, HADOOP-11226.5.patch
>
>
> During heavy shuffle, packet loss for IPC packets was observed from a machine.
> Avoid packet-loss and speed up transfer by using 0x14 QOS bits for the 
> packets.





[jira] [Commented] (HADOOP-11905) Abstraction for LocalDirAllocator

2015-05-12 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14540343#comment-14540343
 ] 

Bikas Saha commented on HADOOP-11905:
-

[~rkannan82] I have not looked at this jira/patch in detail, so I cannot 
comment on whether this change is useful for Hadoop. However, given that you 
are trying to make this change for some dependent logic in Tez, perhaps you 
can open a jira in Tez for this right away instead of waiting for this jira 
to get committed. Tez was designed specifically to allow such customizations 
without needing to change Hadoop/cluster infrastructure. So perhaps, if we 
understand what you are trying to achieve in Tez, it might be possible to do 
that within Tez itself or even outside of it.

 Abstraction for LocalDirAllocator
 -

 Key: HADOOP-11905
 URL: https://issues.apache.org/jira/browse/HADOOP-11905
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.5.2
Reporter: Kannan Rajah
Assignee: Kannan Rajah
  Labels: BB2015-05-TBR
 Fix For: 2.7.1

 Attachments: 0001-Abstraction-for-local-disk-path-allocation.patch


 There are 2 abstractions used to write data to local disk.
 LocalDirAllocator: Allocate paths from a set of configured local directories.
 LocalFileSystem/RawLocalFileSystem: Read/write using java.io.* and java.nio.*
 In the current implementation, local disk is managed by the guest OS and not 
 HDFS. The proposal is to provide a new abstraction that encapsulates the 
 above 2 abstractions and hides who manages the local disks. This enables us 
 to provide an alternate implementation where a DFS can manage the local disks 
 and they can be accessed using HDFS APIs. This means the DFS maintains a 
 namespace for node-local directories and can create paths that are guaranteed 
 to be present on a specific node.
 Here is an example use case for Shuffle: When a mapper writes intermediate 
 data using this new implementation, it will continue to write to local disk. 
 When a reducer needs to access data from a remote node, it can use HDFS APIs 
 with a path that points to that node's local namespace instead of having to 
 use an HTTP server to transfer the data across nodes.
 New Abstractions
 1. LocalDiskPathAllocator
 Interface to get file/directory paths from the local disk namespace.
 This contains all the APIs that are currently supported by LocalDirAllocator. 
 So we just need to change LocalDirAllocator to implement this new interface.
 2. LocalDiskUtil
 Helper class to get a handle to LocalDiskPathAllocator and the FileSystem
 that is used to manage those paths.
 By default, it will return LocalDirAllocator and LocalFileSystem.
 A supporting DFS can return DFSLocalDirAllocator and an instance of DFS.
 3. DFSLocalDirAllocator
 This is a generic implementation. An allocator is created for a specific 
 node. It uses the Configuration object to get the user-configured base 
 directory and appends the node hostname to it. Hence the returned paths are 
 within the node-local namespace.
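 A minimal sketch of what the first proposed interface might look like, 
 assuming it mirrors the existing LocalDirAllocator method signatures 
 (interface name per the proposal above; the method subset is illustrative):
 {code}
 import java.io.IOException;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;

 /** Hands out file/directory paths from a local-disk namespace. */
 public interface LocalDiskPathAllocator {

   /** Allocate a path with room for size bytes, for writing. */
   Path getLocalPathForWrite(String pathStr, long size, Configuration conf)
       throws IOException;

   /** Locate an existing file across the configured local directories. */
   Path getLocalPathToRead(String pathStr, Configuration conf)
       throws IOException;
 }
 {code}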





[jira] [Commented] (HADOOP-9478) Fix race conditions during the initialization of Configuration related to deprecatedKeyMap

2013-11-20 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13828247#comment-13828247
 ] 

Bikas Saha commented on HADOOP-9478:


We noticed that the changes in this jira caused client-side deployment of Tez 
to have errors.
Tez is designed to have a client-side install. So we package Tez and its 
dependencies, upload that onto HDFS, and those jars are used to run Tez jobs. 
Tez brings in mapreduce-client-core.jar as a dependency for InputFormats etc.
When we build Tez against trunk, the mapreduce-client-core.jar that we bring 
in uses DeprecationDelta added in that jar. However, the Configuration in the 
cluster comes from the cluster-deployed jars for hadoop-common, and that does 
not have DeprecationDelta. So the execution fails.
This basically means that if someone compiles MR from trunk and runs MR 
against a cluster deployed with 2.2, then MR will not work.
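For context, the trunk-only API in question looks roughly like this (a 
sketch; the exact call site in mapreduce-client-core may differ), which is 
why code compiled against it fails at runtime with hadoop-common 2.2 on the 
classpath:
{code}
// Configuration.DeprecationDelta and addDeprecations() exist on trunk but
// not in the hadoop-common jars deployed on a 2.2 cluster, so classes that
// reference them fail to load/run there.
Configuration.addDeprecations(new Configuration.DeprecationDelta[] {
    new Configuration.DeprecationDelta("mapred.job.name", "mapreduce.job.name")
});
{code}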

 Fix race conditions during the initialization of Configuration related to 
 deprecatedKeyMap
 --

 Key: HADOOP-9478
 URL: https://issues.apache.org/jira/browse/HADOOP-9478
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
 Environment: OS:
 CentOS release 6.3 (Final)
 JDK:
 java version "1.6.0_27"
 Java(TM) SE Runtime Environment (build 1.6.0_27-b07)
 Java HotSpot(TM) 64-Bit Server VM (build 20.2-b06, mixed mode)
 Hadoop:
 hadoop-2.0.0-cdh4.1.3/hadoop-2.0.0-cdh4.2.0
 Security:
 Kerberos
Reporter: Dongyong Wang
Assignee: Colin Patrick McCabe
 Fix For: 2.2.1

 Attachments: HADOOP-9478.001.patch, HADOOP-9478.002.patch, 
 HADOOP-9478.003.patch, HADOOP-9478.004.patch, HADOOP-9478.005.patch, 
 hadoop-9478-1.patch, hadoop-9478-2.patch


 When we launch a client application which uses Kerberos security, the 
 FileSystem can't be created because of the exception 
 'java.lang.NoClassDefFoundError: Could not initialize class 
 org.apache.hadoop.security.SecurityUtil'.
 I checked the exception stack trace; it may be caused by the unsafe get 
 operation on the deprecatedKeyMap used by 
 org.apache.hadoop.conf.Configuration.
 So I wrote a simple test case:
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 public class HTest {
   public static void main(String[] args) throws Exception {
     Configuration conf = new Configuration();
     conf.addResource("core-site.xml");
     conf.addResource("hdfs-site.xml");
     FileSystem fileSystem = FileSystem.get(conf);
     System.out.println(fileSystem);
     System.exit(0);
   }
 }
 Then I launched this test case many times, and the following exception was 
 thrown:
 Exception in thread "TGT Renewer for XXX" 
 java.lang.ExceptionInInitializerError
  at 
 org.apache.hadoop.security.UserGroupInformation.getTGT(UserGroupInformation.java:719)
  at 
 org.apache.hadoop.security.UserGroupInformation.access$1100(UserGroupInformation.java:77)
  at 
 org.apache.hadoop.security.UserGroupInformation$1.run(UserGroupInformation.java:746)
  at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 16
  at java.util.HashMap.getEntry(HashMap.java:345)
  at java.util.HashMap.containsKey(HashMap.java:335)
  at 
 org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1989)
  at 
 org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1867)
  at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1785)
  at org.apache.hadoop.conf.Configuration.get(Configuration.java:712)
  at 
 org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:731)
  at 
 org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1047)
  at org.apache.hadoop.security.SecurityUtil.<clinit>(SecurityUtil.java:76)
  ... 4 more
 Exception in thread "main" java.io.IOException: Couldn't create proxy 
 provider class 
 org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
  at 
 org.apache.hadoop.hdfs.NameNodeProxies.createFailoverProxyProvider(NameNodeProxies.java:453)
  at 
 org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:133)
  at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:436)
  at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:403)
  at 
 org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:125)
  at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2262)
  at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:86)
  at 
 org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2296)
  at 

[jira] [Moved] (HADOOP-9938) Avoiding redundant Kerberos login for Zookeeper client in ActiveStandbyElector

2013-09-06 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha moved HDFS-5152 to HADOOP-9938:
--

Component/s: (was: security)
 security
Key: HADOOP-9938  (was: HDFS-5152)
Project: Hadoop Common  (was: Hadoop HDFS)

 Avoiding redundant Kerberos login for Zookeeper client in ActiveStandbyElector
 --

 Key: HADOOP-9938
 URL: https://issues.apache.org/jira/browse/HADOOP-9938
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
 Attachments: HDFS-5152.patch


 Based on the fix in HADOOP-8315, it's possible to deploy a secured HA cluster 
 with SASL support for the connection to Zookeeper. However, it requires extra 
 JAAS configuration to initialize the Zookeeper client, because the client 
 performs another login even though the ZKFC service has already completed its 
 Kerberos login during startup.
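 For context, the extra JAAS configuration being referred to is typically 
 along these lines (keytab path and principal are illustrative):
 {code}
 Client {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   storeKey=true
   keyTab="/etc/security/keytabs/zkfc.keytab"
   principal="zkfc/host.example.com@EXAMPLE.COM";
 };
 {code}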



[jira] [Commented] (HADOOP-9906) Move HAZKUtil to o.a.h.util.ZKUtil and make inner-classes public

2013-08-29 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753398#comment-13753398
 ] 

Bikas Saha commented on HADOOP-9906:


Is the following backwards incompatible, since we changed the signature of a 
method in a public class?
{code}
-  public static List<ACL> parseACLs(String aclString) {
+  public static List<ACL> parseACLs(String aclString) throws
+  BadAclFormatException {
{code}

 Move HAZKUtil to o.a.h.util.ZKUtil and make inner-classes public
 

 Key: HADOOP-9906
 URL: https://issues.apache.org/jira/browse/HADOOP-9906
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.0-beta
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor
 Fix For: 2.1.1-beta

 Attachments: hadoop-9906-1.patch, hadoop-9906-2.patch


 HAZKUtil defines a couple of exceptions - BadAclFormatException and 
 BadAuthFormatException - that can be made public for use in other components. 
 For instance, YARN-353 could use it in tests and ACL validation.



[jira] [Commented] (HADOOP-9608) ZKFC should abort if it sees an unrecognized NN become active

2013-05-30 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13670907#comment-13670907
 ] 

Bikas Saha commented on HADOOP-9608:


To be clear, the other node that wasn't rebooted was not active at that time?

 ZKFC should abort if it sees an unrecognized NN become active
 -

 Key: HADOOP-9608
 URL: https://issues.apache.org/jira/browse/HADOOP-9608
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 3.0.0
Reporter: Todd Lipcon

 We recently had an issue where one NameNode and ZKFC was updated to a new 
 configuration/IP address but the ZKFC on the other node was not rebooted. 
 Then, next time a failover occurred, the second ZKFC was not able to become 
 active because the data in the ActiveBreadCrumb didn't match the data in its 
 own configuration:
 {code}
 org.apache.hadoop.ha.ActiveStandbyElector: Exception handling the winning of 
 election
 java.lang.IllegalArgumentException: Unable to determine service address for 
 namenode ''
 {code}
 To prevent this from happening, whenever the ZKFC sees a new NN become 
 active, it should check that it's properly able to instantiate a 
 ServiceTarget for it, and if not, abort (since this ZKFC wouldn't be able to 
 handle a failover successfully)



[jira] [Commented] (HADOOP-9608) ZKFC should abort if it sees an unrecognized NN become active

2013-05-30 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13670966#comment-13670966
 ] 

Bikas Saha commented on HADOOP-9608:


Do you mean because NN A's ZKFC does not have C's address, or that there is a 
fixed mapping of the i-th NN to a server and that mapping is stale?
Is the proposed solution to make every standby ZKFC restart when it discovers 
an active leader that it cannot connect to? That would mean all standby NNs 
would be rebooting in the above scenario when C becomes master, right?

 ZKFC should abort if it sees an unrecognized NN become active
 -

 Key: HADOOP-9608
 URL: https://issues.apache.org/jira/browse/HADOOP-9608
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 3.0.0
Reporter: Todd Lipcon

 We recently had an issue where one NameNode and ZKFC was updated to a new 
 configuration/IP address but the ZKFC on the other node was not rebooted. 
 Then, next time a failover occurred, the second ZKFC was not able to become 
 active because the data in the ActiveBreadCrumb didn't match the data in its 
 own configuration:
 {code}
 org.apache.hadoop.ha.ActiveStandbyElector: Exception handling the winning of 
 election
 java.lang.IllegalArgumentException: Unable to determine service address for 
 namenode ''
 {code}
 To prevent this from happening, whenever the ZKFC sees a new NN become 
 active, it should check that it's properly able to instantiate a 
 ServiceTarget for it, and if not, abort (since this ZKFC wouldn't be able to 
 handle a failover successfully)



[jira] [Commented] (HADOOP-9413) Introduce common utils for File#setReadable/Writable/Executable and File#canRead/Write/Execute that work cross-platform

2013-04-29 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644338#comment-13644338
 ] 

Bikas Saha commented on HADOOP-9413:


bq. Did only a minor update to the patch to note some subtle differences 
between setExecutable on Windows and Unix. Check HADOOP-9525 for additional 
details. Thanks Chris for the great comment on HDFS-4610 that made me realize 
this!
Without a summary of the issue or links to the exact explanatory comments in 
other jiras, it's hard to ascertain what the issue is.

 Introduce common utils for File#setReadable/Writable/Executable and 
 File#canRead/Write/Execute that work cross-platform
 ---

 Key: HADOOP-9413
 URL: https://issues.apache.org/jira/browse/HADOOP-9413
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HADOOP-9413.commonfileutils.2.patch, 
 HADOOP-9413.commonfileutils.3.patch, HADOOP-9413.commonfileutils.4.patch, 
 HADOOP-9413.commonfileutils.5.patch, HADOOP-9413.commonfileutils.patch


 So far, we've seen many unittest and product bugs in Hadoop on Windows 
 because Java's APIs that manipulate with permissions do not work as expected. 
 We've addressed many of these problems on one-by-one basis (by either 
 changing code a bit or disabling the test). While debugging the remaining 
 unittest failures we continue to run into the same patterns of problems, and 
 instead of addressing them one-by-one, I propose that we expose a set of 
 equivalent wrapper APIs that will work well for all platforms.
 Scanning thru the codebase, this will actually be a simple change as there 
 are very few places that use File#setReadable/Writable/Executable and 
 File#canRead/Write/Execute (5 files in Common, 9 files in HDFS).
 HADOOP-8973 contains additional context on the problem.



[jira] [Commented] (HADOOP-9413) Introduce common utils for File#setReadable/Writable/Executable and File#canRead/Write/Execute that work cross-platform

2013-04-29 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13644935#comment-13644935
 ] 

Bikas Saha commented on HADOOP-9413:


Looks good to me.

 Introduce common utils for File#setReadable/Writable/Executable and 
 File#canRead/Write/Execute that work cross-platform
 ---

 Key: HADOOP-9413
 URL: https://issues.apache.org/jira/browse/HADOOP-9413
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HADOOP-9413.commonfileutils.2.patch, 
 HADOOP-9413.commonfileutils.3.patch, HADOOP-9413.commonfileutils.4.patch, 
 HADOOP-9413.commonfileutils.5.patch, HADOOP-9413.commonfileutils.patch


 So far, we've seen many unittest and product bugs in Hadoop on Windows 
 because Java's APIs that manipulate with permissions do not work as expected. 
 We've addressed many of these problems on one-by-one basis (by either 
 changing code a bit or disabling the test). While debugging the remaining 
 unittest failures we continue to run into the same patterns of problems, and 
 instead of addressing them one-by-one, I propose that we expose a set of 
 equivalent wrapper APIs that will work well for all platforms.
 Scanning thru the codebase, this will actually be a simple change as there 
 are very few places that use File#setReadable/Writable/Executable and 
 File#canRead/Write/Execute (5 files in Common, 9 files in HDFS).
 HADOOP-8973 contains additional context on the problem.



[jira] [Commented] (HADOOP-9413) Introduce common utils for File#setReadable/Writable/Executable and File#canRead/Write/Execute that work cross-platform

2013-04-20 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13637411#comment-13637411
 ] 

Bikas Saha commented on HADOOP-9413:


Thanks for doing this. Much better approach and functionally correct!

Enums with defined values would make this check unnecessary. Minor, but 
typesafe.
{code}
+  if (desiredAccess != ACCESS_READ && desiredAccess != ACCESS_WRITE
+      && desiredAccess != ACCESS_EXECUTE) {
{code}

The previous code returned an error when this failed. Is that not necessary 
anymore?
{code}
-  if (!AuthzFreeContext(hAuthzClientContext))
+
+GetEffectiveRightsForSidEnd:
+  if (hManager != NULL)
   {
-ret = GetLastError();
-goto GetEffectiveRightsForSidEnd;
+(void)AuthzFreeResourceManager(hManager);
+  }
+  if (hAuthzClientContext != NULL)
+  {
+(void)AuthzFreeContext(hAuthzClientContext);
   }
{code}

How about changing these to setWritable()? That would exercise the other new 
methods too, and also give an end-to-end symmetry check.
{code}
+FileUtil.chmod(testFile.getAbsolutePath(), "u-r");
+assertFalse(NativeIO.Windows.access(testFile.getAbsolutePath(),
+NativeIO.Windows.ACCESS_READ));
{code}
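That is, something along these lines, using the setWritable wrapper this jira 
introduces (sketch only):
{code}
+FileUtil.setWritable(testFile, false);
+assertFalse(NativeIO.Windows.access(testFile.getAbsolutePath(),
+    NativeIO.Windows.ACCESS_WRITE));
+FileUtil.setWritable(testFile, true);
+assertTrue(NativeIO.Windows.access(testFile.getAbsolutePath(),
+    NativeIO.Windows.ACCESS_WRITE));
{code}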

 Introduce common utils for File#setReadable/Writable/Executable and 
 File#canRead/Write/Execute that work cross-platform
 ---

 Key: HADOOP-9413
 URL: https://issues.apache.org/jira/browse/HADOOP-9413
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HADOOP-9413.commonfileutils.2.patch, 
 HADOOP-9413.commonfileutils.patch


 So far, we've seen many unittest and product bugs in Hadoop on Windows 
 because Java's APIs that manipulate with permissions do not work as expected. 
 We've addressed many of these problems on one-by-one basis (by either 
 changing code a bit or disabling the test). While debugging the remaining 
 unittest failures we continue to run into the same patterns of problems, and 
 instead of addressing them one-by-one, I propose that we expose a set of 
 equivalent wrapper APIs that will work well for all platforms.
 Scanning thru the codebase, this will actually be a simple change as there 
 are very few places that use File#setReadable/Writable/Executable and 
 File#canRead/Write/Execute (5 files in Common, 9 files in HDFS).
 HADOOP-8973 contains additional context on the problem.



[jira] [Updated] (HADOOP-9488) FileUtil#createJarWithClassPath only substitutes environment variables from current process environment/does not support overriding when launching new process

2013-04-19 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-9488:
---

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

+1. Committed to trunk.

 FileUtil#createJarWithClassPath only substitutes environment variables from 
 current process environment/does not support overriding when launching new 
 process
 --

 Key: HADOOP-9488
 URL: https://issues.apache.org/jira/browse/HADOOP-9488
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 3.0.0

 Attachments: HADOOP-9488.1.patch, HADOOP-9488.consolidated.1.patch, 
 HADOOP-9488.consolidated.2.patch


 {{FileUtil#createJarWithClassPath}} always uses {{System#getenv}} for 
 substitution of environment variables in the classpath bundled into the jar 
 manifest.  YARN launches container processes with a different set of 
 environment variables, so the method needs to support providing environment 
 variables different from the current process.
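 A minimal sketch of the kind of substitution involved, resolving references 
 against a caller-supplied environment map instead of System#getenv (the 
 helper class/method and the %VAR% syntax are illustrative):
 {code}
 import java.util.Map;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;

 class ClassPathEnvExpansion {
   private static final Pattern VAR =
       Pattern.compile("%([A-Za-z_][A-Za-z0-9_]*)%");

   /** Expand %VAR% references using the given environment, not the JVM's. */
   static String expand(String entry, Map<String, String> env) {
     Matcher m = VAR.matcher(entry);
     StringBuffer sb = new StringBuffer();
     while (m.find()) {
       String val = env.get(m.group(1));
       // Leave unknown variables unexpanded rather than failing.
       m.appendReplacement(sb,
           Matcher.quoteReplacement(val != null ? val : m.group()));
     }
     m.appendTail(sb);
     return sb.toString();
   }
 }
 {code}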



[jira] [Commented] (HADOOP-8731) TestTrackerDistributedCacheManager fails on Windows

2013-04-02 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619563#comment-13619563
 ] 

Bikas Saha commented on HADOOP-8731:


Patch looks good based on the above comments. It's old and probably needs a 
rebase. Why have the following comments been taken out between the patches? 
And do we mean EVERYONE read permissions?
{code}
-   * EXECUTE permissions for others
+   * EXECUTE permissions for others. On Windows, the visibility criteria
+   * is relaxed, and the cache path is public if the leaf component
+   * has EVERYONE permissions.
{code}

 TestTrackerDistributedCacheManager fails on Windows
 ---

 Key: HADOOP-8731
 URL: https://issues.apache.org/jira/browse/HADOOP-8731
 Project: Hadoop Common
  Issue Type: Bug
  Components: filecache
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8731-PublicCache.2.patch, 
 HADOOP-8731-PublicCache.patch


 Jira tracking TestTrackerDistributedCacheManager test failure. 



[jira] [Commented] (HADOOP-9413) Introduce common utils for File#setReadable/Writable/Executable and File#canRead/Write/Execute that work cross-platform

2013-03-31 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13618472#comment-13618472
 ] 

Bikas Saha commented on HADOOP-9413:


I am in favor of any proposal that leaves the code and expected functionality 
consistent. Ideally, structurally similar and functionally correct. 
Structurally similar but functionally broken may be acceptable for the short 
term as long as non-Windows continues to run correctly. What I would be wary of 
is leaving the code in a situation where it works in A but not in B, because 
that is a pain to read, debug and maintain.

 Introduce common utils for File#setReadable/Writable/Executable and 
 File#canRead/Write/Execute that work cross-platform
 ---

 Key: HADOOP-9413
 URL: https://issues.apache.org/jira/browse/HADOOP-9413
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HADOOP-9413.commonfileutils.patch


 So far, we've seen many unittest and product bugs in Hadoop on Windows 
 because Java's APIs that manipulate with permissions do not work as expected. 
 We've addressed many of these problems on one-by-one basis (by either 
 changing code a bit or disabling the test). While debugging the remaining 
 unittest failures we continue to run into the same patterns of problems, and 
 instead of addressing them one-by-one, I propose that we expose a set of 
 equivalent wrapper APIs that will work well for all platforms.
 Scanning thru the codebase, this will actually be a simple change as there 
 are very few places that use File#setReadable/Writable/Executable and 
 File#canRead/Write/Execute (5 files in Common, 9 files in HDFS).
 HADOOP-8973 contains additional context on the problem.



[jira] [Commented] (HADOOP-9413) Introduce common utils for File#setReadable/Writable/Executable and File#canRead/Write/Execute that work cross-platform

2013-03-27 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13615506#comment-13615506
 ] 

Bikas Saha commented on HADOOP-9413:


Chris already has code that does the expected thing for the scenario in which 
the running process is checking whether it has read/write/execute permissions 
on a directory. We could move them into helper functions and use them. This is 
important because after this check is successful the process goes ahead and 
performs the action that depends on the check. So my preference would be to use 
the code that provides the expected functionality. We can improve that code 
later on.

 Introduce common utils for File#setReadable/Writable/Executable and 
 File#canRead/Write/Execute that work cross-platform
 ---

 Key: HADOOP-9413
 URL: https://issues.apache.org/jira/browse/HADOOP-9413
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HADOOP-9413.commonfileutils.patch


 So far, we've seen many unittest and product bugs in Hadoop on Windows 
 because Java's APIs that manipulate with permissions do not work as expected. 
 We've addressed many of these problems on one-by-one basis (by either 
 changing code a bit or disabling the test). While debugging the remaining 
 unittest failures we continue to run into the same patterns of problems, and 
 instead of addressing them one-by-one, I propose that we expose a set of 
 equivalent wrapper APIs that will work well for all platforms.
 Scanning thru the codebase, this will actually be a simple change as there 
 are very few places that use File#setReadable/Writable/Executable and 
 File#canRead/Write/Execute (5 files in Common, 9 files in HDFS).
 HADOOP-8973 contains additional context on the problem.



[jira] [Commented] (HADOOP-9422) HADOOP_HOME should not be required to be set to be able to launch commands using hadoop.util.Shell

2013-03-21 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13609191#comment-13609191
 ] 

Bikas Saha commented on HADOOP-9422:


According to Harsh, HADOOP_HOME is deprecated in trunk. In that case, we might 
have to remove the reference to HADOOP_HOME. [~qwertymaniac] Can you please 
point to the jira where HADOOP_HOME was deprecated?

 HADOOP_HOME should not be required to be set to be able to launch commands 
 using hadoop.util.Shell
 --

 Key: HADOOP-9422
 URL: https://issues.apache.org/jira/browse/HADOOP-9422
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Hitesh Shah

 Not sure why this is an enforced requirement, especially in cases where a 
 deployment is done using multiple tar-balls (one each for 
 common/hdfs/mapreduce/yarn). 



[jira] [Commented] (HADOOP-9422) HADOOP_HOME should not be required to be set to be able to launch commands using hadoop.util.Shell

2013-03-20 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13608393#comment-13608393
 ] 

Bikas Saha commented on HADOOP-9422:


This check was originally added to branch-1-win where a lot of stuff was 
getting resolved via HADOOP_HOME.

 HADOOP_HOME should not be required to be set to be able to launch commands 
 using hadoop.util.Shell
 --

 Key: HADOOP-9422
 URL: https://issues.apache.org/jira/browse/HADOOP-9422
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Hitesh Shah

 Not sure why this is an enforced requirement, especially in cases where a 
 deployment is done using multiple tar-balls (one each for 
 common/hdfs/mapreduce/yarn). 



[jira] [Commented] (HADOOP-9400) Investigate emulating sticky bit directory permissions on Windows

2013-03-13 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13601607#comment-13601607
 ] 

Bikas Saha commented on HADOOP-9400:


Which use case does this target?

 Investigate emulating sticky bit directory permissions on Windows
 -

 Key: HADOOP-9400
 URL: https://issues.apache.org/jira/browse/HADOOP-9400
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
 Environment: Windows
Reporter: Arpit Agarwal
  Labels: windows
 Fix For: 3.0.0


 It should be possible to emulate sticky bit permissions on Windows.



[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-09 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598115#comment-13598115
 ] 

Bikas Saha commented on HADOOP-8973:


What is the issue?

 DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
 ACLs
 -

 Key: HADOOP-8973
 URL: https://issues.apache.org/jira/browse/HADOOP-8973
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 3.0.0, 1-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: DiskChecker.proto.patch, HADOOP-8973.3.patch, 
 HADOOP-8973-branch-1-win.3.patch, HADOOP-8973-branch-trunk-win.2.patch, 
 HADOOP-8973-branch-trunk-win.patch


 DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
 check if a directory is inaccessible.  These APIs are not reliable on Windows 
 with NTFS ACLs due to a known JVM bug.



[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-09 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13598151#comment-13598151
 ] 

Bikas Saha commented on HADOOP-8973:


bq. the issue is that the test fails when executed in the elevated context on 
Windows, or as root on Linux.
Do we know why that's the case?

 DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
 ACLs
 -

 Key: HADOOP-8973
 URL: https://issues.apache.org/jira/browse/HADOOP-8973
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 3.0.0, 1-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: DiskChecker.proto.patch, HADOOP-8973.3.patch, 
 HADOOP-8973-branch-1-win.3.patch, HADOOP-8973-branch-trunk-win.2.patch, 
 HADOOP-8973-branch-trunk-win.patch


 DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
 check if a directory is inaccessible.  These APIs are not reliable on Windows 
 with NTFS ACLs due to a known JVM bug.



[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-08 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13596936#comment-13596936
 ] 

Bikas Saha commented on HADOOP-8973:


Thanks for following up on my comments. Looks good.
Minor comment: I think we added a check for the Java version and used it to 
enable Java APIs for Java 7, if available. If the Java 7 File.canWrite etc. 
are going to work on Windows (as mentioned in some comments above), then we 
could enable the API path when (!Windows or Java6).
Please do open a jira to fix this issue in branch-1-win too.

 DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
 ACLs
 -

 Key: HADOOP-8973
 URL: https://issues.apache.org/jira/browse/HADOOP-8973
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8973.3.patch, 
 HADOOP-8973-branch-trunk-win.2.patch, HADOOP-8973-branch-trunk-win.patch


 DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
 check if a directory is inaccessible.  These APIs are not reliable on Windows 
 with NTFS ACLs due to a known JVM bug.



[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-08 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597349#comment-13597349
 ] 

Bikas Saha commented on HADOOP-8973:


Makes sense

 DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
 ACLs
 -

 Key: HADOOP-8973
 URL: https://issues.apache.org/jira/browse/HADOOP-8973
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8973.3.patch, 
 HADOOP-8973-branch-trunk-win.2.patch, HADOOP-8973-branch-trunk-win.patch


 DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
 check if a directory is inaccessible.  These APIs are not reliable on Windows 
 with NTFS ACLs due to a known JVM bug.



[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-08 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597647#comment-13597647
 ] 

Bikas Saha commented on HADOOP-8973:


+1 looks good!

 DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
 ACLs
 -

 Key: HADOOP-8973
 URL: https://issues.apache.org/jira/browse/HADOOP-8973
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 3.0.0, 1-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8973.3.patch, HADOOP-8973-branch-1-win.3.patch, 
 HADOOP-8973-branch-trunk-win.2.patch, HADOOP-8973-branch-trunk-win.patch


 DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
 check if a directory is inaccessible.  These APIs are not reliable on Windows 
 with NTFS ACLs due to a known JVM bug.



[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-08 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597650#comment-13597650
 ] 

Bikas Saha commented on HADOOP-8973:


bq. I would prefer that we make Shell#getSetPermissionCommand accept File 
instead of string
+1 for this. I missed it but Ivan did not. I guess he has had to fix more 
cases of string-for-file issues than me :P


 DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
 ACLs
 -

 Key: HADOOP-8973
 URL: https://issues.apache.org/jira/browse/HADOOP-8973
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 3.0.0, 1-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8973.3.patch, HADOOP-8973-branch-1-win.3.patch, 
 HADOOP-8973-branch-trunk-win.2.patch, HADOOP-8973-branch-trunk-win.patch


 DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
 check if a directory is inaccessible.  These APIs are not reliable on Windows 
 with NTFS ACLs due to a known JVM bug.



[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-06 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13594927#comment-13594927
 ] 

Bikas Saha commented on HADOOP-8973:


bq. Can we please keep the scope of this jira limited to Windows compatibility 
for the current logic?
Not until we clearly understand what the flaw is that we are trying to fix. If 
this was branch-1 then I would agree. But on trunk we have the flexibility of 
not making stop-gap changes.

As far as permissions are concerned, I think Hadoop really depends on a 
Unix-like rwx permission model on the local disk. This is in theory unrelated 
to the permission model HDFS itself imposes for its own filesystem, which also 
happens to be the rwx model. This is the main reason why we wrote the winutils 
layer that exposes the rwx model on top of Windows ACLs for the local disk. 
That's why I am trying to understand why we need different checks here, 
because it may imply that our translation layer is not working.
Your analysis might be correct and I would like an HDFS expert like 
[~sanjay.radia] to take a look.

This jira was opened because TestDiskChecker was failing, right? Would you be 
kind enough to change checkDir(File) to use the same logic as checkDir(FS, 
Path, perm) and check whether the test passes? Hopefully the change will be 
small and won't take much of your time. It will help ascertain whether the 
implies function will work or not.
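For reference, the checkDir(FS, Path, perm)-style check under discussion 
boils down to something like this (simplified sketch; note it validates the 
permission bits on the directory, not whether the current process user 
actually holds those rights):
{code}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

static void checkPermission(FileSystem fs, Path dir, FsPermission expected)
    throws Exception {
  FsPermission actual = fs.getFileStatus(dir).getPermission();
  // implies() compares only the rwx bits; it says nothing about which
  // user/group the checking process runs as.
  if (!actual.getUserAction().implies(expected.getUserAction())) {
    throw new Exception("Incorrect user permission for " + dir);
  }
}
{code}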

 DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
 ACLs
 -

 Key: HADOOP-8973
 URL: https://issues.apache.org/jira/browse/HADOOP-8973
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8973-branch-trunk-win.2.patch, 
 HADOOP-8973-branch-trunk-win.patch


 DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
 check if a directory is inaccessible.  These APIs are not reliable on Windows 
 with NTFS ACLs due to a known JVM bug.



[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-06 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13594968#comment-13594968
 ] 

Bikas Saha commented on HADOOP-8973:


I synced offline with Arpit. I now see why the implies check may be incorrect 
for the scenarios you mention. In that case, I would really like this patch to 
fix the other broken checkDir also, because that is incorrect too and it 
doesn't make sense to fix the same bug in 2 patches. This would also need to 
be backported to branch-1.

 DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
 ACLs
 -

 Key: HADOOP-8973
 URL: https://issues.apache.org/jira/browse/HADOOP-8973
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8973-branch-trunk-win.2.patch, 
 HADOOP-8973-branch-trunk-win.patch


 DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
 check if a directory is inaccessible.  These APIs are not reliable on Windows 
 with NTFS ACLs due to a known JVM bug.



[jira] [Commented] (HADOOP-9373) Merge CHANGES.branch-trunk-win.txt to CHANGES.txt

2013-03-06 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13595203#comment-13595203
 ] 

Bikas Saha commented on HADOOP-9373:


+1 Looks good. Where do we record incompatibility, if any?

 Merge CHANGES.branch-trunk-win.txt to CHANGES.txt
 -

 Key: HADOOP-9373
 URL: https://issues.apache.org/jira/browse/HADOOP-9373
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-9373.patch, HADOOP-9373.patch


 This is to merge the changes from CHANGES.branch-trunk-win.txt to appropriate 
 CHANGES.txt files.



[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-05 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13593760#comment-13593760
 ] 

Bikas Saha commented on HADOOP-8973:


I don't have a preference either way on how we perform the check. What I 
don't want is to see 2 code paths do different things for what should 
essentially be the same operation. It is confusing, inconsistent and not good 
to maintain. So please make both versions of checkDirs consistent, ideally 
without code duplication.

Now to the question of how the check is done. It seems to me that what 
checkDir(FS, Path, Perm) is doing is sufficient for our use cases. The code 
that checks permissions via the implies functions is used in a number of 
places in the code. What I am trying to understand is why the check is 
sufficient/correct for all those cases and not here. If there is an issue in 
that logic then we have issues in a number of critical places in the code, 
which should be fixed. It's more than a question of cross-platform 
compatibility. If there is no issue with that logic then it's not clear why 
we need to do something different for this particular case. Does this make my 
questions clear?

We need to be careful about the performance impact of how the check is done. I 
see a number of places that call the checkDirs() method for a collection of 
directories. Do we want to write to a temp file in all those cases in order to 
check for writability?
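For reference, a write-probe along those lines would look something like the 
following (sketch; file name illustrative), i.e. real disk I/O for every 
checked directory:
{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

static void probeWritable(File dir) throws IOException {
  // Actually creating and writing a file proves effective write access,
  // regardless of how the OS reports the permission bits.
  File probe = File.createTempFile("diskchecker", ".tmp", dir);
  try {
    FileOutputStream out = new FileOutputStream(probe);
    try {
      out.write(0);
    } finally {
      out.close();
    }
  } finally {
    probe.delete();
  }
}
{code}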

 DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
 ACLs
 -

 Key: HADOOP-8973
 URL: https://issues.apache.org/jira/browse/HADOOP-8973
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8973-branch-trunk-win.2.patch, 
 HADOOP-8973-branch-trunk-win.patch


 DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
 check if a directory is inaccessible.  These APIs are not reliable on Windows 
 with NTFS ACLs due to a known JVM bug.



[jira] [Commented] (HADOOP-9232) JniBasedUnixGroupsMappingWithFallback fails on Windows with UnsatisfiedLinkError

2013-03-05 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13593776#comment-13593776
 ] 

Bikas Saha commented on HADOOP-9232:


I am not clear whether we have fixed the root cause or just the symptom.
It looks like JniBasedUnixGroupsMappingWithFallback checks for the native impl 
being present, and if not, it falls back to Shell. Thereafter, it will use the 
native code and fail when the native code does not have the expected function.
On Windows, the native code is always present for other reasons, so we are 
never going to fall back to Shell. So basically there is no fallback.
In the patch we have fixed the missing function in the native JNI code. But is 
that the complete fix? For any other/new JNI-based function we will be back in 
this situation, because on Windows the native code is always loaded.
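The fallback decision being described is, roughly (simplified from the class 
named in this jira's summary):
{code}
// The check is whole-library: on Windows, hadoop.dll is always present and
// loaded for other reasons, so the shell branch is never taken, and a
// missing JNI function only surfaces later as UnsatisfiedLinkError.
if (NativeCodeLoader.isNativeCodeLoaded()) {
  this.impl = new JniBasedUnixGroupsMapping();
} else {
  this.impl = new ShellBasedUnixGroupsMapping();
}
{code}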

 JniBasedUnixGroupsMappingWithFallback fails on Windows with 
 UnsatisfiedLinkError
 

 Key: HADOOP-9232
 URL: https://issues.apache.org/jira/browse/HADOOP-9232
 Project: Hadoop Common
  Issue Type: Bug
  Components: native, security
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Ivan Mitic
 Fix For: trunk-win

 Attachments: HADOOP-9232.branch-trunk-win.jnigroups.2.patch, 
 HADOOP-9232.branch-trunk-win.jnigroups.3.patch, 
 HADOOP-9232.branch-trunk-win.jnigroups.patch, HADOOP-9232.patch


 {{JniBasedUnixGroupsMapping}} calls native code which isn't implemented 
 properly for Windows, causing {{UnsatisfiedLinkError}}.  The fallback logic 
 in {{JniBasedUnixGroupsMappingWithFallback}} works by checking if the native 
 code is loaded during startup.  In this case, hadoop.dll is present and 
 loaded, but it doesn't contain the right code.  There will be no attempt to 
 fallback to {{ShellBasedUnixGroupsMapping}}.



[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-04 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13592795#comment-13592795
 ] 

Bikas Saha commented on HADOOP-8973:


During Datanode startup, it checks whether the data dir permissions are 755. 
Using the above logic, is there a hole in that check too, because it does not 
imply that the Datanode has permission to read and write to that directory? 
DiskChecker is used to perform that check too, via another checkDir() method.

 DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
 ACLs
 -

 Key: HADOOP-8973
 URL: https://issues.apache.org/jira/browse/HADOOP-8973
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8973-branch-trunk-win.patch


 DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
 check if a directory is inaccessible.  These APIs are not reliable on Windows 
 with NTFS ACLs due to a known JVM bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-04 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13592877#comment-13592877
 ] 

Bikas Saha commented on HADOOP-8973:


Well, in that case it would be a standard pattern everywhere, because code 
everywhere simply checks the value of the permissions and not whether the 
process checking that value actually has the right membership with respect to 
that value. Isn't that so, irrespective of OS?

Also, one can make a case that checkDir(File dir) should end up calling 
checkDir(FileSystem, FilePath, rwx) instead of duplicating the logic.


 DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
 ACLs
 -

 Key: HADOOP-8973
 URL: https://issues.apache.org/jira/browse/HADOOP-8973
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8973-branch-trunk-win.patch


 DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
 check if a directory is inaccessible.  These APIs are not reliable on Windows 
 with NTFS ACLs due to a known JVM bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2013-03-04 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13592988#comment-13592988
 ] 

Bikas Saha commented on HADOOP-8973:


That is a good idea if we need to find out whether the current process user 
has certain permissions. But I guess the point we are currently debating is 
whether we can make checkDisk(File) do what checkDisk(FS, Path, Perm) does, 
perhaps by simply calling the second function. If checkDisk(FS, Path, Perm) 
meets our other needs then it should be enough. I think the code currently 
checks for expected permissions, implicitly assuming the daemon processes are 
running as the users who are supposed to have those permissions.

 DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
 ACLs
 -

 Key: HADOOP-8973
 URL: https://issues.apache.org/jira/browse/HADOOP-8973
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8973-branch-trunk-win.patch


 DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
 check if a directory is inaccessible.  These APIs are not reliable on Windows 
 with NTFS ACLs due to a known JVM bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9220) Unnecessary transition to standby in ActiveStandbyElector

2013-01-21 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559081#comment-13559081
 ] 

Bikas Saha commented on HADOOP-9220:


Not quite sure I understand. Todd had added a reference to the ZK client so 
that the Elector would only accept watch notifications from the last ZK client. 
That means only 1 ZK client would be driving the Elector.

 Unnecessary transition to standby in ActiveStandbyElector
 -

 Key: HADOOP-9220
 URL: https://issues.apache.org/jira/browse/HADOOP-9220
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Reporter: Tom White
Assignee: Tom White
 Attachments: HADOOP-9220.patch, HADOOP-9220.patch


 When performing a manual failover from one HA node to a second, under some 
 circumstances the second node will transition from standby -> active -> 
 standby -> active. This is with automatic failover enabled, so there is a ZK 
 cluster doing leader election.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Moved] (HADOOP-9133) Windows tarball build fails when enlistment root path is long

2012-12-11 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha moved HDFS-4303 to HADOOP-9133:
--

 Target Version/s:   (was: trunk-win)
Affects Version/s: (was: trunk-win)
   trunk-win
  Key: HADOOP-9133  (was: HDFS-4303)
  Project: Hadoop Common  (was: Hadoop HDFS)

 Windows tarball build fails when enlistment root path is long
 -

 Key: HADOOP-9133
 URL: https://issues.apache.org/jira/browse/HADOOP-9133
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: trunk-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic

 Build error: 
  [copy] Copying 2 files to 
 I:\svn\branch-trunk-win\hadoop-hdfs-project\hadoop-hdfs-httpfs\target\hadoop-hdfs-httpfs-3.0.0-SNAPSHOT\share\hadoop\httpfs\tomcat\webapps\ROOT
  [copy] Copying 176 files to 
 I:\svn\branch-trunk-win\hadoop-hdfs-project\hadoop-hdfs-httpfs\target\hadoop-hdfs-httpfs-3.0.0-SNAPSHOT\share\hadoop\httpfs\tomcat\webapps\webhdfs
  [copy] Copied 23 empty directories to 1 empty directory under 
 I:\svn\branch-trunk-win\hadoop-hdfs-project\hadoop-hdfs-httpfs\target\hadoop-hdfs-httpfs-3.0.0-SNAPSHOT\share\hadoop\httpfs\tomcat\webapps\webhdfs
 [INFO] Executed tasks
 [INFO]
 [INFO] --- maven-antrun-plugin:1.6:run (tar) @ hadoop-hdfs-httpfs ---
 [INFO] Executing tasks
 main:
  [exec] Traceback (most recent call last):
  [exec]   File "dist-maketar.py", line 9, in <module>
  [exec]     tar.add(dir_name, arcname=base_name)
  [exec]   File "I:\git\tools\Python27\lib\tarfile.py", line 1997, in add
  [exec]     recursive, exclude, filter)
  [exec]   File "I:\git\tools\Python27\lib\tarfile.py", line 1997, in add
  [exec]     recursive, exclude, filter)
  [exec]   File "I:\git\tools\Python27\lib\tarfile.py", line 1997, in add
  [exec]     recursive, exclude, filter)
  [exec]   File "I:\git\tools\Python27\lib\tarfile.py", line 1997, in add
  [exec]     recursive, exclude, filter)
  [exec]   File "I:\git\tools\Python27\lib\tarfile.py", line 1997, in add
  [exec]     recursive, exclude, filter)
  [exec]   File "I:\git\tools\Python27\lib\tarfile.py", line 1997, in add
  [exec]     recursive, exclude, filter)
  [exec]   File "I:\git\tools\Python27\lib\tarfile.py", line 1997, in add
  [exec]     recursive, exclude, filter)
  [exec]   File "I:\git\tools\Python27\lib\tarfile.py", line 1997, in add
  [exec]     recursive, exclude, filter)
  [exec]   File "I:\git\tools\Python27\lib\tarfile.py", line 1997, in add
  [exec]     recursive, exclude, filter)
  [exec]   File "I:\git\tools\Python27\lib\tarfile.py", line 1997, in add
  [exec]     recursive, exclude, filter)
  [exec]   File "I:\git\tools\Python27\lib\tarfile.py", line 1997, in add
  [exec]     recursive, exclude, filter)
  [exec]   File "I:\git\tools\Python27\lib\tarfile.py", line 1997, in add
  [exec]     recursive, exclude, filter)
  [exec]   File "I:\git\tools\Python27\lib\tarfile.py", line 1997, in add
  [exec]     recursive, exclude, filter)
  [exec]   File "I:\git\tools\Python27\lib\tarfile.py", line 1997, in add
  [exec]     recursive, exclude, filter)
  [exec]   File "I:\git\tools\Python27\lib\tarfile.py", line 1997, in add
  [exec]     recursive, exclude, filter)
  [exec]   File "I:\git\tools\Python27\lib\tarfile.py", line 1973, in add
  [exec]     tarinfo = self.gettarinfo(name, arcname)
  [exec]   File "I:\git\tools\Python27\lib\tarfile.py", line 1845, in gettarinfo
  [exec]     statres = os.lstat(name)
  [exec] WindowsError: [Error 3] The system cannot find the path specified: 'I:\\svn\\branch-trunk-win\\hadoop-hdfs-project\\hadoop-hdfs-httpfs\\target\\hadoop-hdfs-httpfs-3.0.0-SNAPSHOT\\share\\hadoop\\httpfs\\tomcat\\webapps\\webhdfs\\WEB-INF\\classes\\org\\apache\\hadoop\\lib\\service\\security\\DelegationTokenManagerService$DelegationTokenSecretManager.class'
 [INFO] 
 
 Shrink the enlistment root path to work around the problem. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8958) ViewFs:Non absolute mount name failures when running multiple tests on Windows

2012-12-01 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13507934#comment-13507934
 ] 

Bikas Saha commented on HADOOP-8958:


So the conclusion is that the fix is not required anywhere in product/client 
code; it is a test configuration step.

 ViewFs:Non absolute mount name failures when running multiple tests on Windows
 --

 Key: HADOOP-8958
 URL: https://issues.apache.org/jira/browse/HADOOP-8958
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 3.0.0, trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 3.0.0

 Attachments: HADOOP-8958.2.patch, HADOOP-8958.patch


 This appears to be an issue with parsing a Windows-specific path.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8958) ViewFs:Non absolute mount name failures when running multiple tests on Windows

2012-11-30 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13507878#comment-13507878
 ] 

Bikas Saha commented on HADOOP-8958:


I actually liked the first base-class approach better, because it is more 
future-proof and dev-friendly.

 ViewFs:Non absolute mount name failures when running multiple tests on Windows
 --

 Key: HADOOP-8958
 URL: https://issues.apache.org/jira/browse/HADOOP-8958
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 3.0.0, trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 3.0.0

 Attachments: HADOOP-8958.2.patch, HADOOP-8958.patch


 This appears to be an issue with parsing a Windows-specific path.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8958) ViewFs:Non absolute mount name failures when running multiple tests on Windows

2012-11-30 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13507879#comment-13507879
 ] 

Bikas Saha commented on HADOOP-8958:


Do you think the same issue might be triggered in a normal user scenario that 
performs operations equivalent to the test?

 ViewFs:Non absolute mount name failures when running multiple tests on Windows
 --

 Key: HADOOP-8958
 URL: https://issues.apache.org/jira/browse/HADOOP-8958
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 3.0.0, trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 3.0.0

 Attachments: HADOOP-8958.2.patch, HADOOP-8958.patch


 This appears to be an issue with parsing a Windows-specific path.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9062) hadoop-env.cmd overwrites the value of *_OPTS set before install

2012-11-19 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13500835#comment-13500835
 ] 

Bikas Saha commented on HADOOP-9062:


+1 lgtm.

 hadoop-env.cmd overwrites the value of *_OPTS set before install
 

 Key: HADOOP-9062
 URL: https://issues.apache.org/jira/browse/HADOOP-9062
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
 Environment: Windows
Reporter: Ganeshan Iyer
 Fix For: 1-win

 Attachments: HADOOP-9062.1-win.001.patch


 The values of the following environment variables are overwritten in the 
 hadoop-env.cmd file. 
 HADOOP_NAMENODE_OPTS
 HADOOP_SECONDARYNAMENODE_OPTS
 HADOOP_DATANODE_OPTS
 HADOOP_BALANCER_OPTS
 HADOOP_JOBTRACKER_OPTS
 HADOOP_TASKTRACKER_OPTS
 This blocks us from using these variables for setting other properties. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9027) Build fails on Windows without sh/sed/echo in the path

2012-11-14 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13497323#comment-13497323
 ] 

Bikas Saha commented on HADOOP-9027:


This is great for folks trying to build on Windows!
Perhaps we should have kept the shell scripts for folks on Linux, who are 
currently doing just fine without Python. This change forces them to modify 
existing setups to include Python.

 Build fails on Windows without sh/sed/echo in the path
 --

 Key: HADOOP-9027
 URL: https://issues.apache.org/jira/browse/HADOOP-9027
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 1-win

 Attachments: HADOOP-9027.branch-1-win.cleanbuild.2.patch, 
 HADOOP-9027.branch-1-win.cleanbuild.patch


 Branch-1-win still has a dependency on a few unix tools at compile time. This 
 is a tracking JIRA to remove that dependency.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9006) Winutils should keep Administrators privileges intact

2012-11-06 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13492109#comment-13492109
 ] 

Bikas Saha commented on HADOOP-9006:


Haven't gone through the patch, but one thing to keep in mind is to avoid 
appending new entries to the ACL objects on files/folders, or we may end up 
filling the allowed space and then failing subsequent operations. I have 
encountered this problem in another project, so I thought I would mention it.

 Winutils should keep Administrators privileges intact
 -

 Key: HADOOP-9006
 URL: https://issues.apache.org/jira/browse/HADOOP-9006
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 1-win

 Attachments: HADOOP-9006-branch-1-win.patch


 This issue was originally discovered by [~ivanmi]. His words are quoted as follows.
 {quote}
 Current by design behavior is for winutils to ACL the folders only for the 
 user passed in thru chmod/chown. This causes some un-natural side effects in 
 cases where Hadoop services run in the context of a non-admin user. For 
 example, Administrators on the box will no longer be able to:
  - delete files created in the context of Hadoop services (other users)
  - check the size of the folder where HDFS blocks are stored
 {quote}
 In my opinion, it is natural for some special accounts on Windows to be able 
 to access all the folders, including Hadoop folders. This is similar to Linux, 
 where root users can always access any directory regardless of the 
 permissions set on those directories.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9008) Building hadoop tarball fails on Windows

2012-11-02 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13489926#comment-13489926
 ] 

Bikas Saha commented on HADOOP-9008:


Or we could use Maven plugins, like the ones used to compile protobufs, which avoid sh scripts.

 Building hadoop tarball fails on Windows
 

 Key: HADOOP-9008
 URL: https://issues.apache.org/jira/browse/HADOOP-9008
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: trunk-win
Reporter: Ivan Mitic

 Trying to build Hadoop trunk tarball via {{mvn package -Pdist -DskipTests 
 -Dtar}} fails on Windows.
 The build system generates sh scripts to execute build tasks, which does not 
 work on Windows without Cygwin. It might make sense to apply the same pattern 
 as in HADOOP-8924 and use Python instead of sh.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2012-10-31 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13487796#comment-13487796
 ] 

Bikas Saha commented on HADOOP-8973:


Can we try the following approach, which has the advantage of being common 
across platforms?
1. Get the FileStatus of the file/dir using the FileSystem API.
2. Get the FsPermission via getPermission() on the FileStatus.
3. Use the impliesRead()/impliesWrite()-style functions that tell whether a 
given FsPermission object allows read/write etc. (I am sorry, I don't 
remember where these functions live.)
Using these functions one can get the equivalent of isReadable/isWritable.
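
For illustration, a minimal sketch of that check, assuming the functions 
meant above are FsAction.implies():
{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

// Derive isReadable/isWritable from FileStatus instead of java.io.File.
static boolean ownerCan(FileSystem fs, Path dir, FsAction action)
    throws IOException {
  FileStatus stat = fs.getFileStatus(dir);
  FsPermission perm = stat.getPermission();
  // Owner bits; group/other bits come from getGroupAction()/getOtherAction().
  return perm.getUserAction().implies(action);
}

// Usage: ownerCan(fs, dir, FsAction.READ)  ~ isReadable,
//        ownerCan(fs, dir, FsAction.WRITE) ~ isWritable.
{code}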


 DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
 ACLs
 -

 Key: HADOOP-8973
 URL: https://issues.apache.org/jira/browse/HADOOP-8973
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8973-branch-trunk-win.patch


 DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
 check if a directory is inaccessible.  These APIs are not reliable on Windows 
 with NTFS ACLs due to a known JVM bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8973) DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS ACLs

2012-10-31 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13488471#comment-13488471
 ] 

Bikas Saha commented on HADOOP-8973:


On branch-1-win, winutils is used internally to map the Windows ACLs into rwx 
permissions. If those changes have been ported over, then it should work on 
Windows too. Other places use similar logic, which can be found by looking for 
references to the FsAction.implies() method.

 DiskChecker cannot reliably detect an inaccessible disk on Windows with NTFS 
 ACLs
 -

 Key: HADOOP-8973
 URL: https://issues.apache.org/jira/browse/HADOOP-8973
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: trunk-win
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-8973-branch-trunk-win.patch


 DiskChecker.checkDir uses File.canRead, File.canWrite, and File.canExecute to 
 check if a directory is inaccessible.  These APIs are not reliable on Windows 
 with NTFS ACLs due to a known JVM bug.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-10-30 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8847:
---

Status: Open  (was: Patch Available)

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, 
 HADOOP-8847.branch-1-win.2.patch, test-untar.tar, test-untar.tgz


 Currently FileUtil.unTar() spawns the tar utility to do the work. Tar may not 
 be present on all platforms by default, e.g. Windows. Changing this to use 
 Java APIs would help make it more cross-platform. FileUtil.unZip() uses the 
 same approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-10-30 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8847:
---

Status: Patch Available  (was: Open)

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, 
 HADOOP-8847.branch-1-win.2.patch, HADOOP-8847.branch-1-win.3.patch, 
 test-untar.tar, test-untar.tgz


 Currently FileUtil.unTar() spawns the tar utility to do the work. Tar may not 
 be present on all platforms by default, e.g. Windows. Changing this to use 
 Java APIs would help make it more cross-platform. FileUtil.unZip() uses the 
 same approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-10-30 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8847:
---

Attachment: HADOOP-8847.branch-1-win.3.patch

Rebasing patch

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, 
 HADOOP-8847.branch-1-win.2.patch, HADOOP-8847.branch-1-win.3.patch, 
 test-untar.tar, test-untar.tgz


 Currently FileUtil.unTar() spawns the tar utility to do the work. Tar may not 
 be present on all platforms by default, e.g. Windows. Changing this to use 
 Java APIs would help make it more cross-platform. FileUtil.unZip() uses the 
 same approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-10-29 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13485975#comment-13485975
 ] 

Bikas Saha commented on HADOOP-8847:


Ping

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, 
 HADOOP-8847.branch-1-win.2.patch, test-untar.tar, test-untar.tgz


 Currently FileUtil.unTar() spawns the tar utility to do the work. Tar may not 
 be present on all platforms by default, e.g. Windows. Changing this to use 
 Java APIs would help make it more cross-platform. FileUtil.unZip() uses the 
 same approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8874) HADOOP_HOME and -Dhadoop.home (from hadoop wrapper script) are not uniformly handled

2012-10-18 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13478965#comment-13478965
 ] 

Bikas Saha commented on HADOOP-8874:


Looks good. +1. Some minor comments.

This looks like it's needed for the case when the TT hadoop home is defined 
with -D instead of using an env var. Right? If so, a comment would help.
{code}
+try {
+  env.put(HADOOP_HOME_DIR, Shell.getHadoopHome());
+} catch (IOException ioe) {
+  LOG.warn("Failed to propagate HADOOP_HOME_DIR to child ENV " + ioe);
+}
{code}

The name getQualifiedBinPath() does not show that it is related to qualifying 
with respect to hadoop home, IMO.


 HADOOP_HOME and -Dhadoop.home (from hadoop wrapper script) are not uniformly 
 handled
 

 Key: HADOOP-8874
 URL: https://issues.apache.org/jira/browse/HADOOP-8874
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: scripts, security
Affects Versions: 1-win
 Environment: Called from external process with -D flag vs HADOOP_HOME 
 set.
Reporter: John Gordon
  Labels: security
 Fix For: 1-win

 Attachments: fix_home_np.patch


 There is a -D flag to set hadoop.home, which is specified in the hadoop 
 wrapper scripts.  This is particularly useful if you want SxS execution of 
 two or more versions of hadoop (e.g. rolling upgrade).  However, it isn't 
 honored at all.  HADOOP_HOME is used in 3-4 places to find non-java hadoop 
 components such as schedulers, scripts, shared libraries, or with the Windows 
 changes -- binaries.
 Ideally, these should all resolve the path in a consistent manner, and 
 callers should have a similar onus applied when trying to resolve an invalid 
 path to their components.  This is particularly relevant to scripts or 
 binaries that may have security impact, as absolute path resolution is 
 generally safer and more stable than relative path resolution.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8874) HADOOP_HOME and -Dhadoop.home (from hadoop wrapper script) are not uniformly handled

2012-10-18 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13479193#comment-13479193
 ] 

Bikas Saha commented on HADOOP-8874:


Also, it's common to name the patch file as JIRA.branch.iteration_number

 HADOOP_HOME and -Dhadoop.home (from hadoop wrapper script) are not uniformly 
 handled
 

 Key: HADOOP-8874
 URL: https://issues.apache.org/jira/browse/HADOOP-8874
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: scripts, security
Affects Versions: 1-win
 Environment: Called from external process with -D flag vs HADOOP_HOME 
 set.
Reporter: John Gordon
  Labels: security
 Fix For: 1-win

 Attachments: fix_home_np.patch


 There is a -D flag to set hadoop.home, which is specified in the hadoop 
 wrapper scripts.  This is particularly useful if you want SxS execution of 
 two or more versions of hadoop (e.g. rolling upgrade).  However, it isn't 
 honored at all.  HADOOP_HOME is used in 3-4 places to find non-java hadoop 
 components such as schedulers, scripts, shared libraries, or with the Windows 
 changes -- binaries.
 Ideally, these should all resolve the path in a consistent manner, and 
 callers should have a similar onus applied when trying to resolve an invalid 
 path to their components.  This is particularly relevant to scripts or 
 binaries that may have security impact, as absolute path resolution is 
 generally safer and more stable than relative path resolution.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-10-18 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13479204#comment-13479204
 ] 

Bikas Saha commented on HADOOP-8847:


I haven't used any try/finally blocks because I want untar to fail if there is 
an error on any operation.

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, 
 HADOOP-8847.branch-1-win.2.patch, test-untar.tar, test-untar.tgz


 Currently FileUtil.unTar() spawns the tar utility to do the work. Tar may not 
 be present on all platforms by default, e.g. Windows. Changing this to use 
 Java APIs would help make it more cross-platform. FileUtil.unZip() uses the 
 same approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-10-12 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13475267#comment-13475267
 ] 

Bikas Saha commented on HADOOP-8847:


Is there anything else left for me to do wrt getting this patch ready for 
commit?

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, 
 HADOOP-8847.branch-1-win.2.patch, test-untar.tar, test-untar.tgz


 Currently FileUtil.unTar() spawns the tar utility to do the work. Tar may not 
 be present on all platforms by default, e.g. Windows. Changing this to use 
 Java APIs would help make it more cross-platform. FileUtil.unZip() uses the 
 same approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8868) FileUtil#chmod should normalize the path before calling into shell APIs

2012-10-12 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13475279#comment-13475279
 ] 

Bikas Saha commented on HADOOP-8868:


OK. This also avoids cases where a path is composed of a root path from 
config/defaults that might contain a '/' and a subpath from the local FS that 
contains a '\'.
+1

 FileUtil#chmod should normalize the path before calling into shell APIs
 ---

 Key: HADOOP-8868
 URL: https://issues.apache.org/jira/browse/HADOOP-8868
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8868.branch-1-win.chmod.patch


 We have seen cases where paths passed in from FileUtil#chmod to Shell APIs 
 can contain both forward and backward slashes on Windows.
 This causes problems, since some Windows APIs do not work well with mixed 
 slashes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8420) saveVersions.sh not working on Windows

2012-10-11 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8420:
---

Attachment: HADOOP-8420.1.patch

 saveVersions.sh not working on Windows
 --

 Key: HADOOP-8420
 URL: https://issues.apache.org/jira/browse/HADOOP-8420
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Bikas Saha
 Attachments: HADOOP-8420.1.patch


 This script is executed at build time to generate version number information 
 for Hadoop core. This version number is consumed via APIs by Hive etc. to 
 determine compatibility with Hadoop versions. Currently, because of 
 dependencies on awk, cut and other utilities, this script does not run 
 successfully and version information is not available.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8420) saveVersions.sh not working on Windows

2012-10-11 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8420:
---

Status: Patch Available  (was: Open)

 saveVersions.sh not working on Windows
 --

 Key: HADOOP-8420
 URL: https://issues.apache.org/jira/browse/HADOOP-8420
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Bikas Saha
 Attachments: HADOOP-8420.1.patch


 This script is executed at build time to generate version number information 
 for Hadoop core. This version number is consumed via APIs by Hive etc. to 
 determine compatibility with Hadoop versions. Currently, because of 
 dependencies on awk, cut and other utilities, this script does not run 
 successfully and version information is not available.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8420) saveVersions.sh not working on Windows

2012-10-11 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8420:
---

Assignee: Bikas Saha

 saveVersions.sh not working on Windows
 --

 Key: HADOOP-8420
 URL: https://issues.apache.org/jira/browse/HADOOP-8420
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8420.1.patch


 This script is executed at build time to generate version number information 
 for Hadoop core. This version number is consumed via APIs by Hive etc. to 
 determine compatibility with Hadoop versions. Currently, because of 
 dependencies on awk, cut and other utilities, this script does not run 
 successfully and version information is not available.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8420) saveVersions.sh not working on Windows

2012-10-11 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13474703#comment-13474703
 ] 

Bikas Saha commented on HADOOP-8420:


Attaching a version of saveVersion that is written in Python and can be used 
cross-platform. This introduces a build dependency on Python 2.7.
Matt Foley helped write most of the Python code.

 saveVersions.sh not working on Windows
 --

 Key: HADOOP-8420
 URL: https://issues.apache.org/jira/browse/HADOOP-8420
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8420.1.patch


 This script is executed at build time to generate version number information 
 for Hadoop core. This version number is consumed via APIs by Hive etc. to 
 determine compatibility with Hadoop versions. Currently, because of 
 dependencies on awk, cut and other utilities, this script does not run 
 successfully and version information is not available.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8908) 'winutils.exe' code refactory

2012-10-09 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13472938#comment-13472938
 ] 

Bikas Saha commented on HADOOP-8908:


Looks like a simple file-renaming refactor to create libwinutils, plus 
changes to the references to it. +1

 'winutils.exe' code refactory 
 --

 Key: HADOOP-8908
 URL: https://issues.apache.org/jira/browse/HADOOP-8908
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1-win
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 1-win

 Attachments: HADOOP-8908-branch-1-win.patch


 We want to split the existing 'winutil.exe' code into a library project and 
 an executable project. The library project will generate a static library, 
 which can be linked and used in future projects. It is also good software 
 engineering practice to introduce modularity into the project.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8869) Links at the bottom of the jobdetails page do not render correctly in IE9

2012-10-09 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13472947#comment-13472947
 ] 

Bikas Saha commented on HADOOP-8869:


Looks like a simple fix. +1

 Links at the bottom of the jobdetails page do not render correctly in IE9
 -

 Key: HADOOP-8869
 URL: https://issues.apache.org/jira/browse/HADOOP-8869
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: Fixed_IE_Chrome_FF.png, 
 HADOOP-8869.branch-1-win.ie_links.patch, IE9.png, OtherBrowsers.png


 See the attached screenshots IE9.png vs OtherBrowsers.png

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8868) FileUtil#chmod should normalize the path before calling into shell APIs

2012-10-09 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13472952#comment-13472952
 ] 

Bikas Saha commented on HADOOP-8868:


So we are using the Java API to resolve the path to a normalized form? 
Ideally the FileUtil method could take File arguments instead of strings, but 
we'd like to avoid changing the public API.
In what cases can we get a mix of slashes in the string path?
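
For illustration, a minimal sketch of that normalization via java.io.File (an 
assumption about the mechanism, not a quote of the patch):
{code}
import java.io.File;

// java.io.File normalizes separator characters to the platform default, so
// round-tripping a string through File removes mixed slashes on Windows.
static String normalizePath(String path) {
  // On Windows, "C:/data\blocks" becomes "C:\data\blocks";
  // on Unix, '/' is already the separator so the string is unchanged.
  return new File(path).getPath();
}
{code}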

 FileUtil#chmod should normalize the path before calling into shell APIs
 ---

 Key: HADOOP-8868
 URL: https://issues.apache.org/jira/browse/HADOOP-8868
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8868.branch-1-win.chmod.patch


 We have seen cases where paths passed in from FileUtil#chmod to Shell APIs 
 can contain both forward and backward slashes on Windows.
 This causes problems, since some Windows APIs do not work well with mixed 
 slashes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-10-07 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8847:
---

Attachment: HADOOP-8847.branch-1-win.2.patch

Attaching a patch that restores the previous behavior for non-Windows 
platforms. The test-untar* files need to be added to 
src/test/org/apache/hadoop/fs/ to complete the patch and commit it.

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, 
 HADOOP-8847.branch-1-win.2.patch, test-untar.tar, test-untar.tgz


 Currently FileUtil.unTar() spawns the tar utility to do the work. Tar may not 
 be present on all platforms by default, e.g. Windows. Changing this to use 
 Java APIs would help make it more cross-platform. FileUtil.unZip() uses the 
 same approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-10-03 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13468361#comment-13468361
 ] 

Bikas Saha commented on HADOOP-8847:


Java File.setExecutable/File.setWritable don't work as expected on Windows. In 
any case, the distributed cache explicitly sets permissions on untar'd files 
after expanding archives, so there should be no problem. Here is the code 
snippet from TrackerDistributedCacheManager.downloadCacheObject(); see the end 
of the snippet.
{code}
if (isArchive) {
  String tmpArchive = workFile.getName().toLowerCase();
  File srcFile = new File(workFile.toString());
  File destDir = new File(workDir.toString());
  LOG.info(String.format("Extracting %s to %s",
           srcFile.toString(), destDir.toString()));
  if (tmpArchive.endsWith(".jar")) {
    RunJar.unJar(srcFile, destDir);
  } else if (tmpArchive.endsWith(".zip")) {
    FileUtil.unZip(srcFile, destDir);
  } else if (isTarFile(tmpArchive)) {
    FileUtil.unTar(srcFile, destDir);
  } else {
    LOG.warn(String.format(
        "Cache file %s specified as archive, but not valid extension.",
        srcFile.toString()));
    // else will not do anything
    // and copy the file into the dir as it is
  }
  FileUtil.chmod(destDir.toString(), "ugo+rx", true);
}
{code}
If you are really worried about this change then I could continue to use the 
existing spawn-tar implementation for Linux.

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, test-untar.tar, 
 test-untar.tgz


 Currently FileUtil.unTar() spawns the tar utility to do the work. Tar may not 
 be present on all platforms by default, e.g. Windows. Changing this to use 
 Java APIs would help make it more cross-platform. FileUtil.unZip() uses the 
 same approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-10-03 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13468675#comment-13468675
 ] 

Bikas Saha commented on HADOOP-8847:


The {{true}} argument at the end is for recursive descent, so everything gets 
chmod'd.

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, test-untar.tar, 
 test-untar.tgz


 Currently FileUtil.unTar() spawns the tar utility to do the work. Tar may not 
 be present on all platforms by default, e.g. Windows. Changing this to use 
 Java APIs would help make it more cross-platform. FileUtil.unZip() uses the 
 same approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8879) TestUserGroupInformation fails on Windows when runas Administrator

2012-10-03 Thread Bikas Saha (JIRA)
Bikas Saha created HADOOP-8879:
--

 Summary: TestUserGroupInformation fails on Windows when runas 
Administrator
 Key: HADOOP-8879
 URL: https://issues.apache.org/jira/browse/HADOOP-8879
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha


User names are case-insensitive on Windows, and whoami returns "administrator" 
instead of "Administrator", causing the test assertion to fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8879) TestUserGroupInformation fails on Windows when runas Administrator

2012-10-03 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8879:
---

Priority: Minor  (was: Major)

 TestUserGroupInformation fails on Windows when runas Administrator
 --

 Key: HADOOP-8879
 URL: https://issues.apache.org/jira/browse/HADOOP-8879
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Priority: Minor

 User names are case-insensitive on Windows, and whoami returns "administrator" 
 instead of "Administrator", causing the test assertion to fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8879) TestUserGroupInformation fails on Windows when runas Administrator

2012-10-03 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8879:
---

Attachment: HADOOP-8879.branch-1-win.1.patch

Attaching a quick fix that normalizes the user name to lower case on Windows.
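
A minimal sketch of the idea (assuming Shell.WINDOWS is the platform check 
used; the attached patch is authoritative):
{code}
import java.util.Locale;
import org.apache.hadoop.util.Shell;

// Compare user names case-insensitively on Windows by normalizing the
// name returned by whoami to lower case before the assertion.
static String normalizeUser(String userName) {
  return Shell.WINDOWS ? userName.toLowerCase(Locale.ENGLISH) : userName;
}
{code}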

 TestUserGroupInformation fails on Windows when runas Administrator
 --

 Key: HADOOP-8879
 URL: https://issues.apache.org/jira/browse/HADOOP-8879
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Priority: Minor
 Attachments: HADOOP-8879.branch-1-win.1.patch


 User names are case-insensitive on Windows, and whoami returns "administrator" 
 instead of "Administrator", causing the test assertion to fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-10-03 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13469164#comment-13469164
 ] 

Bikas Saha commented on HADOOP-8847:


I am going to restore the old behavior for the non-Windows case. I don't see 
much value in re-implementing all the idiosyncrasies and features of unix tar 
in Java. The goal of the Java implementation was to be good enough for the use 
case of users submitting tar archives as distributed cache files. There are 
some tests that validate this functionality, and these were failing on 
Windows. Most likely users will submit simple tar files to the distributed 
cache; Windows users will likely submit zip files rather than tar files, since 
tar is not native to Windows. Hence I will re-submit the patch so that it uses 
the Java code only on Windows.

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, test-untar.tar, 
 test-untar.tgz


 Currently FileUtil.unTar() spawns the tar utility to do the work. Tar may not 
 be present on all platforms by default, e.g. Windows. Changing this to use 
 Java APIs would help make it more cross-platform. FileUtil.unZip() uses the 
 same approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-09-26 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13463957#comment-13463957
 ] 

Bikas Saha commented on HADOOP-8847:


I will try to add a long file name to the tar file and check that.
From what I have seen, callers who need specific permissions have to set them 
after unTar, because tar does not do it for them. I see the code in the 
distributed shell explicitly set permissions after the untar operation.

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, test-untar.tar, 
 test-untar.tgz


 Currently FileUtil.unTar() spawns the tar utility to do the work. Tar may not 
 be present on all platforms by default, e.g. Windows. Changing this to use 
 Java APIs would help make it more cross-platform. FileUtil.unZip() uses the 
 same approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-09-25 Thread Bikas Saha (JIRA)
Bikas Saha created HADOOP-8847:
--

 Summary: Change untar to use Java API instead of spawning tar 
process
 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha


Currently FileUtil.unTar() spawns the tar utility to do the work. Tar may not 
be present on all platforms by default, e.g. Windows. Changing this to use 
Java APIs would help make it more cross-platform. FileUtil.unZip() uses the 
same approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-09-25 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8847:
---

Attachment: test-untar.tgz
test-untar.tar

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: test-untar.tar, test-untar.tgz


 Currently FileUtil.unTar() spawns the tar utility to do the work. Tar may not 
 be present on all platforms by default, e.g. Windows. Changing this to use 
 Java APIs would help make it more cross-platform. FileUtil.unZip() uses the 
 same approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-09-25 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8847:
---

Attachment: HADOOP-8847.branch-1-win.1.patch

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, test-untar.tar, 
 test-untar.tgz


 Currently FileUtil.unTar() spawns the tar utility to do the work. Tar may not 
 be present on all platforms by default, e.g. Windows. Changing this to use 
 Java APIs would help make it more cross-platform. FileUtil.unZip() uses the 
 same approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-09-25 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8847:
---

Status: Patch Available  (was: Open)

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, test-untar.tar, 
 test-untar.tgz


 Currently FileUtil.unTar() spawns the tar utility to do the work. Tar may not 
 be present on all platforms by default, e.g. Windows. Changing this to use 
 Java APIs would help make it more cross-platform. FileUtil.unZip() uses the 
 same approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8847) Change untar to use Java API instead of spawning tar process

2012-09-25 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13463446#comment-13463446
 ] 

Bikas Saha commented on HADOOP-8847:


Attaching a patch that uses the Java Apache Commons API to do the untar. Adds a 
test that does a sanity check.
I have written a manual test that untars a tar file 10 times using the old and 
new methods, and both have similar performance. Not adding that test because it 
takes a long time to run.
Attaching 2 test resource files. The patch file has the locations where these 
resources need to be committed.
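For reference, a minimal sketch of what the Commons Compress approach looks 
like (illustrative only, not the attached patch; error handling and entry-name 
sanitization are omitted):
{code}
// Sketch of untarring via Apache Commons Compress instead of spawning tar.
// For a .tgz, the raw stream is first wrapped in GzipCompressorInputStream.
import java.io.*;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
import org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream;
import org.apache.commons.compress.utils.IOUtils;

public class UnTarSketch {
  public static void unTar(File tarball, File untarDir, boolean gzipped)
      throws IOException {
    InputStream in = new BufferedInputStream(new FileInputStream(tarball));
    if (gzipped) {
      in = new GzipCompressorInputStream(in);
    }
    TarArchiveInputStream tis = new TarArchiveInputStream(in);
    try {
      TarArchiveEntry entry;
      while ((entry = tis.getNextTarEntry()) != null) {
        // NOTE: a real implementation should reject entry names containing
        // ".." before writing anything to disk.
        File target = new File(untarDir, entry.getName());
        if (entry.isDirectory()) {
          target.mkdirs();
        } else {
          target.getParentFile().mkdirs();
          OutputStream out = new FileOutputStream(target);
          try {
            IOUtils.copy(tis, out);  // copies only the current entry's bytes
          } finally {
            out.close();
          }
        }
      }
    } finally {
      tis.close();
    }
  }
}
{code}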

 Change untar to use Java API instead of spawning tar process
 

 Key: HADOOP-8847
 URL: https://issues.apache.org/jira/browse/HADOOP-8847
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8847.branch-1-win.1.patch, test-untar.tar, 
 test-untar.tgz


 Currently FileUtil.unTar() spawns the tar utility to do the work. Tar may not 
 be present on all platforms by default, e.g. Windows. Changing this to use 
 Java APIs would help make it more cross-platform. FileUtil.unZip() already 
 uses this approach.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8836) UGI should throw exception in case winutils.exe cannot be loaded

2012-09-24 Thread Bikas Saha (JIRA)
Bikas Saha created HADOOP-8836:
--

 Summary: UGI should throw exception in case winutils.exe cannot be 
loaded
 Key: HADOOP-8836
 URL: https://issues.apache.org/jira/browse/HADOOP-8836
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha


In upstream projects like Hive, it's hard to see why getting user group 
information failed because the API swallows the exception. One such case is 
when winutils is not present where Hadoop expects it to be.
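As a rough illustration of the proposed behavior (not the actual patch; the 
class and method are approximate), the fix amounts to propagating the failure 
instead of swallowing it:
{code}
// Illustrative sketch only: rethrow with context so callers like Hive can
// see the root cause, e.g. winutils.exe missing from the expected location.
import java.io.IOException;
import org.apache.hadoop.util.Shell;

public class GroupLookupSketch {
  public static String[] getGroups(String user) throws IOException {
    try {
      String out = Shell.execCommand(Shell.getGroupsForUserCommand(user));
      return out.trim().split("\\s+");
    } catch (IOException e) {
      // Before the fix, a failure here would be logged and an empty result
      // returned, hiding problems like a missing winutils.exe.
      throw new IOException("Failed to get groups for user " + user, e);
    }
  }
}
{code}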

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8836) UGI should throw exception in case winutils.exe cannot be loaded

2012-09-24 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8836:
---

Attachment: HADOOP-8836.branch-1-win.1.patch

 UGI should throw exception in case winutils.exe cannot be loaded
 

 Key: HADOOP-8836
 URL: https://issues.apache.org/jira/browse/HADOOP-8836
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8836.branch-1-win.1.patch


 In upstream projects like Hive, it's hard to see why getting user group 
 information failed because the API swallows the exception. One such case 
 is when winutils is not present where Hadoop expects it to be.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8836) UGI should throw exception in case winutils.exe cannot be loaded

2012-09-24 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8836:
---

Status: Patch Available  (was: Open)

 UGI should throw exception in case winutils.exe cannot be loaded
 

 Key: HADOOP-8836
 URL: https://issues.apache.org/jira/browse/HADOOP-8836
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8836.branch-1-win.1.patch


 In upstream projects like Hive, it's hard to see why getting user group 
 information failed because the API swallows the exception. One such case 
 is when winutils is not present where Hadoop expects it to be.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8836) UGI should throw exception in case winutils.exe cannot be loaded

2012-09-24 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8836:
---

Priority: Minor  (was: Major)

 UGI should throw exception in case winutils.exe cannot be loaded
 

 Key: HADOOP-8836
 URL: https://issues.apache.org/jira/browse/HADOOP-8836
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
Priority: Minor
 Attachments: HADOOP-8836.branch-1-win.1.patch


 In upstream projects like Hive, it's hard to see why getting user group 
 information failed because the API swallows the exception. One such case 
 is when winutils is not present where Hadoop expects it to be.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-8250) Investigate uses of FileUtil and functional correctness based on current use cases

2012-09-17 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha resolved HADOOP-8250.


Resolution: Won't Fix

Multiple other jiras have superseded this.

 Investigate uses of FileUtil and functional correctness based on current use 
 cases
 --

 Key: HADOOP-8250
 URL: https://issues.apache.org/jira/browse/HADOOP-8250
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 1.1.0
Reporter: Bikas Saha
Assignee: Bikas Saha

 The current Windows patch replaces symlink with copy. This jira tracks 
 understanding the implications of this change and others like it on expected 
 functionality.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8731) Public distributed cache support for Windows

2012-09-14 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13456270#comment-13456270
 ] 

Bikas Saha commented on HADOOP-8731:


Ivan, I think Vinod means the following.
In a real distributed cluster, where the filesystem is HDFS, the current 
methods work because dist cache files are on HDFS and HDFS permissions resemble 
POSIX. So isPublic() etc. is called on the HDFS filesystem, and things work.
When running with the local file system, say in the LocalJobRunner scenario, 
dist cache files are on the local FS. In that case the current methods like 
isPublic() do not work on Windows for the reasons you mentioned. This is 
what is happening in the test. 


 Public distributed cache support for Windows
 

 Key: HADOOP-8731
 URL: https://issues.apache.org/jira/browse/HADOOP-8731
 Project: Hadoop Common
  Issue Type: Bug
  Components: filecache
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8731-PublicCache.patch


 A distributed cache file is considered public (sharable between MR jobs) if 
 OTHER has read permissions on the file and +x permissions all the way up in 
 the folder hierarchy. By default, Windows permissions are mapped to 700 all 
 the way up to the drive letter, and it is unreasonable to ask users to change 
 the permission on the whole drive to make the file public. IOW, it is hardly 
 possible to have public distributed cache on Windows. 
 To enable the scenario and make it more Windows friendly, the criteria for 
 when a file is considered public should be relaxed. One proposal is to check 
 only whether the user has given the EVERYONE group permission on the file 
 (and drop the +x check on parent folders).
 Security considerations for the proposal: Default permissions on Unix 
 platforms are usually 775 or 755, meaning that OTHER users can read and 
 list folders by default. What this also means is that Hadoop users have to 
 explicitly make the files private in order to make them private in the 
 cluster (please correct me if this is not the case in real life!). On 
 Windows, default permissions are 700. This means that by default all files 
 are private. In the new model, if users want to make them public, they have 
 to explicitly add EVERYONE group permissions on the file. 
 TestTrackerDistributedCacheManager fails because of this issue.
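A rough sketch of the existing criteria, to make the discussion concrete 
(illustrative only; names are approximate, and the real logic lives in 
TrackerDistributedCacheManager):
{code}
// A file is treated as public only if OTHER can read the file itself and
// every ancestor directory grants OTHER execute (traversal) permission.
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;

public class PublicCacheCheckSketch {
  public static boolean isPublic(FileSystem fs, Path file) throws IOException {
    FsAction other = fs.getFileStatus(file).getPermission().getOtherAction();
    if (!other.implies(FsAction.READ)) {
      return false;  // OTHER cannot read the file itself
    }
    for (Path dir = file.getParent(); dir != null; dir = dir.getParent()) {
      FsAction dirOther =
          fs.getFileStatus(dir).getPermission().getOtherAction();
      if (!dirOther.implies(FsAction.EXECUTE)) {
        // An ancestor blocks traversal for OTHER; on Windows this is
        // typically everything up to the drive root, which maps to 700.
        return false;
      }
    }
    return true;
  }
}
{code}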

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8733) TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail on Windows

2012-09-14 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13456271#comment-13456271
 ] 

Bikas Saha commented on HADOOP-8733:


Actually, the TT is informed of the spawned process identifier by the child 
process (via TaskUmbilicalProtocol.getTask()). On Linux, this is the Linux OS 
pid set via the shell script. On Windows it is the job object identifier set by 
the TT (currently set to the task attempt id). The child obtains the value from 
an ENV variable.
The MXBean code is a fallback to get the process OS pid in case the ENV var is 
not set. This works cross-platform on most JDKs. It is relevant on Windows only 
when we don't use job objects for spawning processes (i.e. directly spawning 
Windows processes).
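A sketch of this lookup order (illustrative only; the ENV variable name below 
is hypothetical):
{code}
// Prefer the identifier the TT put in the child's environment; fall back to
// the RuntimeMXBean name, which is "pid@hostname" on most JVMs (not
// guaranteed by the spec to contain the OS pid).
import java.lang.management.ManagementFactory;

public class ProcessIdSketch {
  public static String getProcessIdentifier() {
    String fromEnv = System.getenv("HADOOP_TASK_PROCESS_ID");  // hypothetical
    if (fromEnv != null && !fromEnv.isEmpty()) {
      return fromEnv;  // Linux pid or Windows job object id, set by the TT
    }
    String name = ManagementFactory.getRuntimeMXBean().getName();
    return name.split("@")[0];  // best-effort OS pid
  }
}
{code}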

 TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail 
 on Windows
 ---

 Key: HADOOP-8733
 URL: https://issues.apache.org/jira/browse/HADOOP-8733
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8733-scripts.2.patch, 
 HADOOP-8733-scripts.2.patch, HADOOP-8733-scripts.patch


 Jira tracking test failures related to test .sh script dependencies. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8731) Public distributed cache support for Windows

2012-09-13 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13455024#comment-13455024
 ] 

Bikas Saha commented on HADOOP-8731:


Looks like the chmod fixes an existing generic bug.

Can you please clarify the following scenario so that other folks reading this 
thread can follow it easily?
Directory A (perm for user Foo) contains directory B (perm for Everyone).
So the contents of A will be private cache and the contents of B will be public 
cache on Windows, but not on Linux.


 Public distributed cache support for Windows
 

 Key: HADOOP-8731
 URL: https://issues.apache.org/jira/browse/HADOOP-8731
 Project: Hadoop Common
  Issue Type: Bug
  Components: filecache
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8731-PublicCache.patch


 A distributed cache file is considered public (sharable between MR jobs) if 
 OTHER has read permissions on the file and +x permissions all the way up in 
 the folder hierarchy. By default, Windows permissions are mapped to 700 all 
 the way up to the drive letter, and it is unreasonable to ask users to change 
 the permission on the whole drive to make the file public. IOW, it is hardly 
 possible to have public distributed cache on Windows. 
 To enable the scenario and make it more Windows friendly, the criteria for 
 when a file is considered public should be relaxed. One proposal is to check 
 only whether the user has given the EVERYONE group permission on the file 
 (and drop the +x check on parent folders).
 Security considerations for the proposal: Default permissions on Unix 
 platforms are usually 775 or 755, meaning that OTHER users can read and 
 list folders by default. What this also means is that Hadoop users have to 
 explicitly make the files private in order to make them private in the 
 cluster (please correct me if this is not the case in real life!). On 
 Windows, default permissions are 700. This means that by default all files 
 are private. In the new model, if users want to make them public, they have 
 to explicitly add EVERYONE group permissions on the file. 
 TestTrackerDistributedCacheManager fails because of this issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8734) LocalJobRunner does not support private distributed cache

2012-09-13 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13455043#comment-13455043
 ] 

Bikas Saha commented on HADOOP-8734:


Sorry. I got totally confused and misread the test file name in the patch. +1. 
Thanks!

 LocalJobRunner does not support private distributed cache
 -

 Key: HADOOP-8734
 URL: https://issues.apache.org/jira/browse/HADOOP-8734
 Project: Hadoop Common
  Issue Type: Bug
  Components: filecache
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8734-LocalJobRunner.patch


 It seems that LocalJobRunner does not support private distributed cache. The 
 issue is more visible on Windows as all DC files are private by default (see 
 HADOOP-8731).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8731) Public distributed cache support for Windows

2012-09-12 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13454240#comment-13454240
 ] 

Bikas Saha commented on HADOOP-8731:


Can you please explain the chmod() changes in TrackerDistributedCacheManager?

 Public distributed cache support for Windows
 

 Key: HADOOP-8731
 URL: https://issues.apache.org/jira/browse/HADOOP-8731
 Project: Hadoop Common
  Issue Type: Bug
  Components: filecache
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8731-PublicCache.patch


 A distributed cache file is considered public (sharable between MR jobs) if 
 OTHER has read permissions on the file and +x permissions all the way up in 
 the folder hierarchy. By default, Windows permissions are mapped to 700 all 
 the way up to the drive letter, and it is unreasonable to ask users to change 
 the permission on the whole drive to make the file public. IOW, it is hardly 
 possible to have public distributed cache on Windows. 
 To enable the scenario and make it more Windows friendly, the criteria for 
 when a file is considered public should be relaxed. One proposal is to check 
 only whether the user has given the EVERYONE group permission on the file 
 (and drop the +x check on parent folders).
 Security considerations for the proposal: Default permissions on Unix 
 platforms are usually 775 or 755, meaning that OTHER users can read and 
 list folders by default. What this also means is that Hadoop users have to 
 explicitly make the files private in order to make them private in the 
 cluster (please correct me if this is not the case in real life!). On 
 Windows, default permissions are 700. This means that by default all files 
 are private. In the new model, if users want to make them public, they have 
 to explicitly add EVERYONE group permissions on the file. 
 TestTrackerDistributedCacheManager fails because of this issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8734) LocalJobRunner does not support private distributed cache

2012-09-12 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13454326#comment-13454326
 ] 

Bikas Saha commented on HADOOP-8734:


bq. Check out the fix I did to TestMRWithDistributedCache, this is an E2E use 
case.
What fix are you mentioning?

 LocalJobRunner does not support private distributed cache
 -

 Key: HADOOP-8734
 URL: https://issues.apache.org/jira/browse/HADOOP-8734
 Project: Hadoop Common
  Issue Type: Bug
  Components: filecache
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8734-LocalJobRunner.patch


 It seems that LocalJobRunner does not support private distributed cache. The 
 issue is more visible on Windows as all DC files are private by default (see 
 HADOOP-8731).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8734) LocalJobRunner does not support private distributed cache

2012-09-12 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13454539#comment-13454539
 ] 

Bikas Saha commented on HADOOP-8734:


So if I understand this right, this fixes a generic deficiency in 
LocalJobRunner that wasn't showing up because files are publicly readable by 
default on the Linux FS, so LocalJobRunner would not see issues in accessing 
the private distributed cache from the local FS.
Also, would this make the change to TestMRWithDistributedCache unnecessary?

 LocalJobRunner does not support private distributed cache
 -

 Key: HADOOP-8734
 URL: https://issues.apache.org/jira/browse/HADOOP-8734
 Project: Hadoop Common
  Issue Type: Bug
  Components: filecache
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8734-LocalJobRunner.patch


 It seems that LocalJobRunner does not support private distributed cache. The 
 issue is more visible on Windows as all DC files are private by default (see 
 HADOOP-8731).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8763) Set group owner on Windows failed

2012-09-12 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13454553#comment-13454553
 ] 

Bikas Saha commented on HADOOP-8763:


The following code seems to be an unrelated change. Also, do you mean BUILDIN 
or BUILTIN?
{code}
+  // Empty name is invalid. However, LookupAccountName() function will return a
+  // false Sid, i.e. Sid for 'BUILDIN', for an empty name instead failing. We
+  // report the error before calling LookupAccountName() function for this
+  // special case.
+  //
+  if (wcslen(acctName) == 0)
+return FALSE;
{code}

Do you see any unexpected behavior for users because of the following?
{code}
+On Linux, if a colon but no group name follows the user name, the group of\n\
+the files is changed to that user\'s login group. Windows has no concept of\n\
+a user's login group. So we do not change the group owner in this case.\n,
 program)
{code}

 Set group owner on Windows failed
 -

 Key: HADOOP-8763
 URL: https://issues.apache.org/jira/browse/HADOOP-8763
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 1-win

 Attachments: HADOOP-8763-branch-1-win-2.patch, 
 HADOOP-8763-branch-1-win.patch


 RawLocalFileSystem.setOwner() method may incorrectly set the group owner of a 
 file on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8694) Create true symbolic links on Windows

2012-09-12 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13454560#comment-13454560
 ] 

Bikas Saha commented on HADOOP-8694:


+1 looks good.

 Create true symbolic links on Windows
 -

 Key: HADOOP-8694
 URL: https://issues.apache.org/jira/browse/HADOOP-8694
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Chuan Liu
Assignee: Chuan Liu
 Attachments: HADOOP-8694-branch-1-win-2.patch, 
 HADOOP-8694-branch-1-win.patch, secpol.png


 In branch-1-win, we currently copy files for symbolic links in Hadoop on 
 Windows. We have talked to [~davidlao] who made the original fix, and did 
 some investigation on Windows. Windows has supported symbolic links 
 (symlinks) since Vista/Server 2008. The original reason to copy files instead 
 of creating actual symlinks is that only Administrators have the privilege to 
 create symlinks on Windows _by default_. After talking to NTFS folks, we knew 
 the reason for that is mostly security, and this default behavior may not 
 change in the near future. The behavior can be changed via the Local 
 Security Policy management console, i.e. secpol.msc, under Security 
 Settings\Local Policies\User Rights Assignment\Create symbolic links.
  
 In Hadoop, symlinks are mostly used for DistributedCache and task attempt 
 logs. We felt these usages are important enough for us to provide true 
 symlink support, and users need to have the symlink creation privilege 
 enabled on Windows to use Hadoop.
 This JIRA is created to track symlink support on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8694) Create true symbolic links on Windows

2012-09-12 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13454562#comment-13454562
 ] 

Bikas Saha commented on HADOOP-8694:


The vcproj changes look like every line has an edit. Is it a line-ending 
issue? Could you run this through dos2unix?

 Create true symbolic links on Windows
 -

 Key: HADOOP-8694
 URL: https://issues.apache.org/jira/browse/HADOOP-8694
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Chuan Liu
Assignee: Chuan Liu
 Attachments: HADOOP-8694-branch-1-win-2.patch, 
 HADOOP-8694-branch-1-win.patch, secpol.png


 In branch-1-win, we currently copy files for symbolic links in Hadoop on 
 Windows. We have talked to [~davidlao] who made the original fix, and did 
 some investigation on Windows. Windows has supported symbolic links 
 (symlinks) since Vista/Server 2008. The original reason to copy files instead 
 of creating actual symlinks is that only Administrators have the privilege to 
 create symlinks on Windows _by default_. After talking to NTFS folks, we knew 
 the reason for that is mostly security, and this default behavior may not 
 change in the near future. The behavior can be changed via the Local 
 Security Policy management console, i.e. secpol.msc, under Security 
 Settings\Local Policies\User Rights Assignment\Create symbolic links.
  
 In Hadoop, symlinks are mostly used for DistributedCache and task attempt 
 logs. We felt these usages are important enough for us to provide true 
 symlink support, and users need to have the symlink creation privilege 
 enabled on Windows to use Hadoop.
 This JIRA is created to track symlink support on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8763) Set group owner on Windows failed

2012-09-10 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13452154#comment-13452154
 ] 

Bikas Saha commented on HADOOP-8763:


What is the problem this patch is trying to solve? An example would be good.

 Set group owner on Windows failed
 -

 Key: HADOOP-8763
 URL: https://issues.apache.org/jira/browse/HADOOP-8763
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 1-win

 Attachments: HADOOP-8763-branch-1-win-2.patch, 
 HADOOP-8763-branch-1-win.patch


 RawLocalFileSystem.setOwner() method may incorrectly set the group owner of a 
 file on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-8536) Problem with -Dproperty=value option on windows hadoop

2012-09-06 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha resolved HADOOP-8536.


Resolution: Duplicate

Dup of HADOOP-8739 

 Problem with -Dproperty=value option on windows hadoop
 --

 Key: HADOOP-8536
 URL: https://issues.apache.org/jira/browse/HADOOP-8536
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Trupti Dhavle

 While running the Java examples, the -Dproperty=value option to the hadoop 
 command is not getting read correctly.
 TERASORT COMMAND: 
 C:\hdp\branch-1-win\bin\hadoop   jar 
 C:\hdp\branch-1-win\build\hadoop-examples-1.1.0-SNAPSHOT.jar terasort  
 -Dmapreduce.reduce.input.limit=-1 teraInputDir teraOutputDir  
 Error-
 12/06/27 10:28:26 INFO terasort.TeraSort: starting
 org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: 
 hdfs://localhost:8020/user/Administrator/-1
 It tries to look in a directory named -1 instead of teraInputDir.
 After setting echo on in the cmd scripts, I noticed that the = sign 
 disappears in the command passed to the JVM:
 terasort -Dmapreduce.reduce.input.limit -1 teraInputDir teraOutputDir 
 To make it read properly, quotes around "-Dproperty=value" are required.
 This JIRA is to track fixing this issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8733) TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail on Windows

2012-09-05 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13449288#comment-13449288
 ] 

Bikas Saha commented on HADOOP-8733:


LTC does not run on anything other than Linux.

 TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail 
 on Windows
 ---

 Key: HADOOP-8733
 URL: https://issues.apache.org/jira/browse/HADOOP-8733
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8733-scripts.patch


 Jira tracking test failures related to test .sh script dependencies. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8457) Address file ownership issue for users in Administrators group on Windows.

2012-08-29 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444302#comment-13444302
 ] 

Bikas Saha commented on HADOOP-8457:


I am +1 on this. Sanjay, are you OK with going forward on this?

 Address file ownership issue for users in Administrators group on Windows.
 --

 Key: HADOOP-8457
 URL: https://issues.apache.org/jira/browse/HADOOP-8457
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 1.1.0, 0.24.0
Reporter: Chuan Liu
Assignee: Ivan Mitic
Priority: Minor
 Attachments: HADOOP-8457-branch-1-win_Admins(2).patch, 
 HADOOP-8457-branch-1-win_Admins.patch


 On Linux, the initial file owners are the creators. (I think this is true in 
 general. If there are exceptions, please let me know.) On Windows, a file 
 created by a user in the Administrators group has the initial owner 
 'Administrators', i.e. the Administrators group is the initial owner of 
 the file. As a result, this leads to an exception when we check file 
 ownership in the SecureIOUtils.checkStat() method, and the method is 
 disabled right now. We need to address this problem and enable the method on 
 Windows.
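A simplified illustration of the kind of ownership check involved (not the 
actual SecureIOUtils code; names are approximate):
{code}
// Verify that the file's owner matches the expected user. On Windows, a file
// created by an administrator is owned by the 'Administrators' group, so
// this equality check fails even though the creator is the expected user.
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OwnerCheckSketch {
  public static void checkOwner(FileSystem fs, Path path, String expectedOwner)
      throws IOException {
    FileStatus stat = fs.getFileStatus(path);
    if (!stat.getOwner().equals(expectedOwner)) {
      throw new IOException("Owner '" + stat.getOwner() + "' of " + path
          + " does not match expected owner '" + expectedOwner + "'");
    }
  }
}
{code}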

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8732) Address intermittent test failures on Windows

2012-08-29 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444309#comment-13444309
 ] 

Bikas Saha commented on HADOOP-8732:


+1. After this fix I don't see the intermittent failures after multiple runs.

 Address intermittent test failures on Windows
 -

 Key: HADOOP-8732
 URL: https://issues.apache.org/jira/browse/HADOOP-8732
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8732-IntermittentFailures.patch


 There are a few tests that fail intermittently on Windows with a timeout 
 error. This means that the test was actually killed from the outside, and it 
 would continue to run otherwise. 
 The following are examples of such tests (there might be others):
  - TestJobInProgress (this issue repros pretty consistently in Eclipse on 
 this one)
  - TestControlledMapReduceJob
  - TestServiceLevelAuthorization

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8733) TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail on Windows

2012-08-29 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444315#comment-13444315
 ] 

Bikas Saha commented on HADOOP-8733:


+1 with a minor comment.

In MAPREDUCE-4510 I added Shell.LINUX.
Does it make sense to run the LTC test when Shell.LINUX instead of when 
!Shell.WINDOWS? I think it reads better.


 TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail 
 on Windows
 ---

 Key: HADOOP-8733
 URL: https://issues.apache.org/jira/browse/HADOOP-8733
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8733-scripts.patch


 Jira tracking test failures related to test .sh script dependencies. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8734) LocalJobRunner does not support private distributed cache

2012-08-29 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13444554#comment-13444554
 ] 

Bikas Saha commented on HADOOP-8734:


Can you please elaborate on the cause and the fix? Thanks!

 LocalJobRunner does not support private distributed cache
 -

 Key: HADOOP-8734
 URL: https://issues.apache.org/jira/browse/HADOOP-8734
 Project: Hadoop Common
  Issue Type: Bug
  Components: filecache
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8734-LocalJobRunner.patch


 It seems that LocalJobRunner does not support private distributed cache. The 
 issue is more visible on Windows as all DC files are private by default (see 
 HADOOP-8731).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8739) Cmd scripts for Windows have issues in argument parsing

2012-08-28 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443403#comment-13443403
 ] 

Bikas Saha commented on HADOOP-8739:


c:\ hadoop dfs -ls /
Found 2 items
drwxrwxrwx   - Administrator supergroup  0 2012-07-06 15:00 /tmp
drwxr-xr-x   - Administrator supergroup  0 2012-07-06 18:52 /user

c:\ hadoop dfs -rmr /tmp/*
Usage: java FsShell [-rmr [-skipTrash] src ]

This would end up resolving the * against the local filesystem and fail.

 Cmd scripts for Windows have issues in argument parsing
 ---

 Key: HADOOP-8739
 URL: https://issues.apache.org/jira/browse/HADOOP-8739
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8739.branch-1-win.1.patch


 The parsing of the arguments has a bug in the way they are broken down, and 
 this breaks things such as handling of globbing (the * character).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8739) Cmd scripts for Windows have issues in argument parsing

2012-08-28 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443404#comment-13443404
 ] 

Bikas Saha commented on HADOOP-8739:


Thanks for the reviews!

 Cmd scripts for Windows have issues in argument parsing
 ---

 Key: HADOOP-8739
 URL: https://issues.apache.org/jira/browse/HADOOP-8739
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8739.branch-1-win.1.patch


 The parsing of the arguments has a bug in the way they are broken down, and 
 this breaks things such as handling of globbing (the * character).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8657) TestCLI fails on Windows because it uses hardcoded file length of test files committed to the source code

2012-08-27 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8657:
---

Attachment: HADOOP-8657.branch-1-win.1.patch

 TestCLI fails on Windows because it uses hardcoded file length of test files 
 committed to the source code
 -

 Key: HADOOP-8657
 URL: https://issues.apache.org/jira/browse/HADOOP-8657
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8657.branch-1-win.1.patch


 The actual length of the file would depend on the character encoding used and 
 hence cannot be hard-coded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8657) TestCLI fails on Windows because it uses hardcoded file length of test files committed to the source code

2012-08-27 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha updated HADOOP-8657:
---

Status: Patch Available  (was: Open)

 TestCLI fails on Windows because it uses hardcoded file length of test files 
 committed to the source code
 -

 Key: HADOOP-8657
 URL: https://issues.apache.org/jira/browse/HADOOP-8657
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8657.branch-1-win.1.patch


 The actual length of the file would depend on the character encoding used and 
 hence cannot be hard-coded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8657) TestCLI fails on Windows because it uses hardcoded file length of test files committed to the source code

2012-08-27 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13442823#comment-13442823
 ] 

Bikas Saha commented on HADOOP-8657:


The test was failing because it was checking file sizes on disk and the sizes 
were hardcoded in the test. Text file sizes can differ across platforms based 
on character encodings etc. The fix was to read the actual file size from disk 
and then check values based on that instead of a hardcoded value. The test 
files are actually checked into the source as resources. Ideally, the test 
would generate these files on the fly instead of checking them in, but I am 
leaving that re-organization of the code tree for later, when the branch is 
merged back, so as to simplify the merge.
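A sketch of the idea (illustrative only; the resource path below is made up):
{code}
// Derive the expected length from the file as it exists on this machine,
// so differing encodings/line endings across platforms don't break the test.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ExpectedSizeSketch {
  public static long expectedLength(String resourcePath) throws Exception {
    FileSystem localFs = FileSystem.getLocal(new Configuration());
    // Length as stored on disk, whatever the checkout produced.
    return localFs.getFileStatus(new Path(resourcePath)).getLen();
  }
}
{code}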

 TestCLI fails on Windows because it uses hardcoded file length of test files 
 committed to the source code
 -

 Key: HADOOP-8657
 URL: https://issues.apache.org/jira/browse/HADOOP-8657
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha
 Attachments: HADOOP-8657.branch-1-win.1.patch


 The actual length of the file would depend on the character encoding used and 
 hence cannot be hard-coded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

