[jira] [Created] (HADOOP-12073) Azure FileSystem PageBlobInputStream does not return -1 on EOF
Ivan Mitic created HADOOP-12073: --- Summary: Azure FileSystem PageBlobInputStream does not return -1 on EOF Key: HADOOP-12073 URL: https://issues.apache.org/jira/browse/HADOOP-12073 Project: Hadoop Common Issue Type: Bug Components: tools Affects Versions: 3.0.0 Reporter: Ivan Mitic Assignee: Ivan Mitic Azure FileSystem PageBlobInputStream does not return -1 on EOF. In some scenarios this causes infinite hangs when reading files (e.g. copyToLocal can hang forever). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
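Why a missing -1 hangs callers can be sketched with a minimal, self-contained example (hypothetical code, not the PageBlobInputStream implementation): standard copy loops terminate only when InputStream#read returns -1, so a stream that signals EOF any other way spins forever.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class EofDemo {
    // Standard copy loop: it terminates only when read() returns -1.
    // A stream that returns 0 (or anything else) at EOF keeps this
    // loop alive forever, which is the hang described above.
    public static int copy(InputStream in, ByteArrayOutputStream out) throws IOException {
        byte[] buf = new byte[4096];
        int total = 0;
        int n;
        while ((n = in.read(buf, 0, buf.length)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("hello".getBytes(StandardCharsets.UTF_8));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        System.out.println(copy(in, out));
    }
}
```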
[jira] [Created] (HADOOP-12033) Reducer task failure with java.lang.NoClassDefFoundError: Ljava/lang/InternalError at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect
Ivan Mitic created HADOOP-12033: --- Summary: Reducer task failure with java.lang.NoClassDefFoundError: Ljava/lang/InternalError at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect Key: HADOOP-12033 URL: https://issues.apache.org/jira/browse/HADOOP-12033 Project: Hadoop Common Issue Type: Bug Reporter: Ivan Mitic We have noticed intermittent reducer task failures with the below exception:
{code}
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#9
	at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.NoClassDefFoundError: Ljava/lang/InternalError
	at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompressBytesDirect(Native Method)
	at org.apache.hadoop.io.compress.snappy.SnappyDecompressor.decompress(SnappyDecompressor.java:239)
	at org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:88)
	at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
	at org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.shuffle(InMemoryMapOutput.java:97)
	at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:534)
	at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:329)
	at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193)
Caused by: java.lang.ClassNotFoundException: Ljava.lang.InternalError
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	... 9 more
{code}
Usually, the reduce task succeeds on retry. Some of the symptoms are similar to HADOOP-8423, but that fix is already included (this is on Hadoop 2.6).
[jira] [Created] (HADOOP-11959) WASB should configure client side socket timeout in storage client blob request options
Ivan Mitic created HADOOP-11959: --- Summary: WASB should configure client side socket timeout in storage client blob request options Key: HADOOP-11959 URL: https://issues.apache.org/jira/browse/HADOOP-11959 Project: Hadoop Common Issue Type: Bug Components: tools Reporter: Ivan Mitic Assignee: Ivan Mitic On clusters/jobs where {{mapred.task.timeout}} is set to a larger value, we noticed that tasks can sometimes get stuck on the below stack.
{code}
Thread 1: (state = IN_NATIVE)
 - java.net.SocketInputStream.socketRead0(java.io.FileDescriptor, byte[], int, int, int) @bci=0 (Interpreted frame)
 - java.net.SocketInputStream.read(byte[], int, int, int) @bci=87, line=152 (Interpreted frame)
 - java.net.SocketInputStream.read(byte[], int, int) @bci=11, line=122 (Interpreted frame)
 - java.io.BufferedInputStream.fill() @bci=175, line=235 (Interpreted frame)
 - java.io.BufferedInputStream.read1(byte[], int, int) @bci=44, line=275 (Interpreted frame)
 - java.io.BufferedInputStream.read(byte[], int, int) @bci=49, line=334 (Interpreted frame)
 - sun.net.www.MeteredStream.read(byte[], int, int) @bci=16, line=134 (Interpreted frame)
 - java.io.FilterInputStream.read(byte[], int, int) @bci=7, line=133 (Interpreted frame)
 - sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(byte[], int, int) @bci=4, line=3053 (Interpreted frame)
 - com.microsoft.azure.storage.core.NetworkInputStream.read(byte[], int, int) @bci=7, line=49 (Interpreted frame)
 - com.microsoft.azure.storage.blob.CloudBlob$10.postProcessResponse(java.net.HttpURLConnection, com.microsoft.azure.storage.blob.CloudBlob, com.microsoft.azure.storage.blob.CloudBlobClient, com.microsoft.azure.storage.OperationContext, java.lang.Integer) @bci=204, line=1691 (Interpreted frame)
 - com.microsoft.azure.storage.blob.CloudBlob$10.postProcessResponse(java.net.HttpURLConnection, java.lang.Object, java.lang.Object, com.microsoft.azure.storage.OperationContext, java.lang.Object) @bci=17, line=1613 (Interpreted frame)
 - com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(java.lang.Object, java.lang.Object, com.microsoft.azure.storage.core.StorageRequest, com.microsoft.azure.storage.RetryPolicyFactory, com.microsoft.azure.storage.OperationContext) @bci=352, line=148 (Interpreted frame)
 - com.microsoft.azure.storage.blob.CloudBlob.downloadRangeInternal(long, java.lang.Long, byte[], int, com.microsoft.azure.storage.AccessCondition, com.microsoft.azure.storage.blob.BlobRequestOptions, com.microsoft.azure.storage.OperationContext) @bci=131, line=1468 (Interpreted frame)
 - com.microsoft.azure.storage.blob.BlobInputStream.dispatchRead(int) @bci=31, line=255 (Interpreted frame)
 - com.microsoft.azure.storage.blob.BlobInputStream.readInternal(byte[], int, int) @bci=52, line=448 (Interpreted frame)
 - com.microsoft.azure.storage.blob.BlobInputStream.read(byte[], int, int) @bci=28, line=420 (Interpreted frame)
 - java.io.BufferedInputStream.read1(byte[], int, int) @bci=39, line=273 (Interpreted frame)
 - java.io.BufferedInputStream.read(byte[], int, int) @bci=49, line=334 (Interpreted frame)
 - java.io.DataInputStream.read(byte[], int, int) @bci=7, line=149 (Interpreted frame)
 - org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsInputStream.read(byte[], int, int) @bci=10, line=734 (Interpreted frame)
 - java.io.BufferedInputStream.read1(byte[], int, int) @bci=39, line=273 (Interpreted frame)
 - java.io.BufferedInputStream.read(byte[], int, int) @bci=49, line=334 (Interpreted frame)
 - java.io.DataInputStream.read(byte[]) @bci=8, line=100 (Interpreted frame)
 - org.apache.hadoop.util.LineReader.fillBuffer(java.io.InputStream, byte[], boolean) @bci=2, line=180 (Interpreted frame)
 - org.apache.hadoop.util.LineReader.readDefaultLine(org.apache.hadoop.io.Text, int, int) @bci=64, line=216 (Compiled frame)
 - org.apache.hadoop.util.LineReader.readLine(org.apache.hadoop.io.Text, int, int) @bci=19, line=174 (Interpreted frame)
 - org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue() @bci=108, line=185 (Interpreted frame)
 - org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue() @bci=13, line=553 (Interpreted frame)
 - org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue() @bci=4, line=80 (Interpreted frame)
 - org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue() @bci=4, line=91 (Interpreted frame)
 - org.apache.hadoop.mapreduce.Mapper.run(org.apache.hadoop.mapreduce.Mapper$Context) @bci=6, line=144 (Interpreted frame)
 - org.apache.hadoop.mapred.MapTask.runNewMapper(org.apache.hadoop.mapred.JobConf, org.apache.hadoop.mapreduce.split.JobSplit$TaskSplitIndex, org.apache.hadoop.mapred.TaskUmbilicalProtocol, org.apache.hadoop.mapred.Task$TaskReporter) @bci=228, line=784 (Interpreted frame)
 - org.apache.hadoop.mapred.MapTask.run(org.apache.ha
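The stack above blocks in socketRead0 because no client-side read timeout is configured. The concept can be illustrated with a self-contained JDK example (illustrative only; the actual fix would set a timeout on the Azure storage client's blob request options, not on HttpURLConnection): without a read timeout, a read from a stalled server blocks indefinitely; with one, it fails fast.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.ServerSocket;
import java.net.SocketTimeoutException;
import java.net.URL;

public class ReadTimeoutDemo {
    // Connects to a server that accepts the TCP connection but never
    // responds, mimicking a stalled storage endpoint. Without the
    // setReadTimeout call, the read would block forever in socketRead0.
    public static boolean timesOut(int port, int timeoutMs) throws IOException {
        HttpURLConnection conn =
            (HttpURLConnection) new URL("http://127.0.0.1:" + port + "/").openConnection();
        conn.setConnectTimeout(timeoutMs);
        conn.setReadTimeout(timeoutMs); // the missing client-side knob
        try {
            conn.getInputStream().read(); // server never answers the request
            return false;
        } catch (SocketTimeoutException expected) {
            return true; // read failed fast instead of hanging
        } finally {
            conn.disconnect();
        }
    }

    public static void main(String[] args) throws Exception {
        // ServerSocket with no accept() call: connections sit in the
        // backlog, so connect succeeds but no response ever arrives.
        try (ServerSocket silent = new ServerSocket(0)) {
            System.out.println(timesOut(silent.getLocalPort(), 500));
        }
    }
}
```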
[jira] [Created] (HADOOP-11578) Hadoop Azure file system does not track all FileSystem.Statistics
Ivan Mitic created HADOOP-11578: --- Summary: Hadoop Azure file system does not track all FileSystem.Statistics Key: HADOOP-11578 URL: https://issues.apache.org/jira/browse/HADOOP-11578 Project: Hadoop Common Issue Type: Bug Components: tools Affects Versions: 3.0.0 Reporter: Ivan Mitic Assignee: Ivan Mitic Just noticed that the Azure file system does not implement all counters from FileSystem.Statistics. The missing counters are:
- Number of read operations
- Number of large read operations
- Number of write operations
[jira] [Resolved] (HADOOP-9096) Improve performance of Windows install scripts
[ https://issues.apache.org/jira/browse/HADOOP-9096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Mitic resolved HADOOP-9096. Resolution: Won't Fix Install scripts are no longer distributed with hadoop.
> Improve performance of Windows install scripts
> -
>
> Key: HADOOP-9096
> URL: https://issues.apache.org/jira/browse/HADOOP-9096
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 1-win
> Reporter: Ivan Mitic
>
> Things to improve for better performance:
> The whole install is taking 4-5 mins on a single box because of:
> 1) Inclusion of src & other temp folders (IVY etc..) in the winpkg, so it takes longer to decompress the zip folder
> 2) Nested zip files in the winpkg. If we remove the nested zips then after manual unpack everything will be an xcopy install, which reduces install time.
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10182) ZKFailoverController should implicitly ensure that ZK is formatted
Ivan Mitic created HADOOP-10182: --- Summary: ZKFailoverController should implicitly ensure that ZK is formatted Key: HADOOP-10182 URL: https://issues.apache.org/jira/browse/HADOOP-10182 Project: Hadoop Common Issue Type: Bug Components: ha Affects Versions: 2.3.0 Reporter: Ivan Mitic Assignee: Ivan Mitic Currently, a manual ZK format step is necessary before ZKFailoverController can join leader election. This adds overhead to what is needed to configure an HA cluster. My proposal is to always ensure that ZK is formatted during startup of the ZKFailoverController. If folks do not want to make this the default, we can put it under a config. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Created] (HADOOP-10090) Jobtracker metrics not updated properly after execution of a mapreduce job
Ivan Mitic created HADOOP-10090: --- Summary: Jobtracker metrics not updated properly after execution of a mapreduce job Key: HADOOP-10090 URL: https://issues.apache.org/jira/browse/HADOOP-10090 Project: Hadoop Common Issue Type: Bug Components: metrics Affects Versions: 1.2.1 Reporter: Ivan Mitic Assignee: Ivan Mitic After executing a wordcount mapreduce sample job, jobtracker metrics are not updated properly. Oftentimes the response from the jobtracker shows a higher number of jobs completed than jobs submitted (for example 8 jobs completed and 7 jobs submitted). Issue reported by Toma Paunovic. -- This message was sent by Atlassian JIRA (v6.1#6144)
[jira] [Created] (HADOOP-9970) TaskTracker hung after failed reconnect to the JobTracker
Ivan Mitic created HADOOP-9970: -- Summary: TaskTracker hung after failed reconnect to the JobTracker Key: HADOOP-9970 URL: https://issues.apache.org/jira/browse/HADOOP-9970 Project: Hadoop Common Issue Type: Bug Affects Versions: 1.3.0 Reporter: Ivan Mitic Assignee: Ivan Mitic TaskTracker hung after a failed reconnect to the JobTracker. This is the problematic piece of code:
{code}
this.distributedCacheManager = new TrackerDistributedCacheManager(
    this.fConf, taskController);
this.distributedCacheManager.startCleanupThread();
this.jobClient = (InterTrackerProtocol)
    UserGroupInformation.getLoginUser().doAs(
        new PrivilegedExceptionAction() {
          public Object run() throws IOException {
            return RPC.waitForProxy(InterTrackerProtocol.class,
                InterTrackerProtocol.versionID, jobTrackAddr, fConf);
          }
        });
{code}
In case RPC.waitForProxy() throws, the TrackerDistributedCacheManager cleanup thread will never be stopped, and given that it is a non-daemon thread it will keep the TT up forever. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
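The fix pattern can be sketched with a self-contained example (CleanupService below is a hypothetical stand-in for TrackerDistributedCacheManager, not the real API): once a non-daemon thread has been started, any later initialization failure must stop that thread before propagating, or the JVM can never exit.

```java
import java.io.IOException;

public class StartupDemo {
    // Hypothetical stand-in for the cleanup-thread owner.
    static class CleanupService {
        private final Thread worker = new Thread(() -> {
            try {
                Thread.sleep(Long.MAX_VALUE); // stand-in for the cleanup loop
            } catch (InterruptedException ignored) {
                // interrupted: fall through and let the thread exit
            }
        });
        void start() { worker.start(); }   // non-daemon by default, like TT's thread
        void stop() {
            worker.interrupt();
            try { worker.join(5000); } catch (InterruptedException ignored) { }
        }
        boolean running() { return worker.isAlive(); }
    }

    // Mirrors the TaskTracker initialization order: start the cleanup
    // thread, then perform the RPC step that may throw. The catch block
    // is the missing piece: stop the non-daemon thread before rethrowing.
    static CleanupService initialize(boolean simulateRpcFailure) throws Exception {
        CleanupService cleanup = new CleanupService();
        cleanup.start();
        try {
            if (simulateRpcFailure) {
                throw new IOException("simulated RPC.waitForProxy failure");
            }
            return cleanup;
        } catch (Exception e) {
            cleanup.stop(); // without this, the JVM hangs on exit
            throw e;
        }
    }
}
```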
[jira] [Resolved] (HADOOP-9551) Backport common utils introduced with HADOOP-9413 to branch-1-win
[ https://issues.apache.org/jira/browse/HADOOP-9551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Mitic resolved HADOOP-9551. Resolution: Fixed Fix Version/s: 1-win
> Backport common utils introduced with HADOOP-9413 to branch-1-win
> -
>
> Key: HADOOP-9551
> URL: https://issues.apache.org/jira/browse/HADOOP-9551
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 1-win
> Reporter: Ivan Mitic
> Assignee: Ivan Mitic
> Fix For: 1-win
>
> Attachments: HADOOP-9551.branch-1-win.common.2.patch,
> HADOOP-9551.branch-1-win.common.3.patch,
> HADOOP-9551.branch-1-win.common.4.patch
>
>
> Branch-1-win has the same set of problems described in HADOOP-9413. With this
> Jira I plan to prepare a branch-1-win compatible patch.
[jira] [Created] (HADOOP-9824) TestSymlinkHdfsDisable fails on Windows
Ivan Mitic created HADOOP-9824: -- Summary: TestSymlinkHdfsDisable fails on Windows Key: HADOOP-9824 URL: https://issues.apache.org/jira/browse/HADOOP-9824 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0, 2.1.0-beta Reporter: Ivan Mitic Assignee: Ivan Mitic
{noformat}
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 8.798 sec <<< FAILURE!
testSymlinkHdfsDisable(org.apache.hadoop.fs.TestSymlinkHdfsDisable) Time elapsed: 8704 sec <<< ERROR!
java.lang.IllegalArgumentException: Pathname /I:/svn/tr/hadoop-hdfs-project/hadoop-hdfs/target/test/data/tO9GO35Iup from hdfs://testhostname:34452/I:/svn/tr/hadoop-hdfs-project/hadoop-hdfs/target/test/data/tO9GO35Iup is not a valid DFS filename.
	at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:184)
	at org.apache.hadoop.hdfs.DistributedFileSystem.access$1(DistributedFileSystem.java:180)
	at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:816)
	at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:1)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:830)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:805)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1932)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:232)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:224)
	at org.apache.hadoop.fs.TestSymlinkHdfsDisable.testSymlinkHdfsDisable(TestSymlinkHdfsDisable.java:49)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:62)
{noformat}
[jira] [Created] (HADOOP-9791) Add a test case covering long paths for new FileUtil access check methods
Ivan Mitic created HADOOP-9791: -- Summary: Add a test case covering long paths for new FileUtil access check methods Key: HADOOP-9791 URL: https://issues.apache.org/jira/browse/HADOOP-9791 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0, 1-win, 2.1.0-beta Reporter: Ivan Mitic Assignee: Ivan Mitic We've seen historically that paths longer than 260 chars can cause things not to work on Windows if not properly handled. Filing a tracking Jira to add a native io test case with long paths for new FileUtil access check methods added with HADOOP-9413.
[jira] [Created] (HADOOP-9678) TestRPC#testStopsAllThreads intermittently fails on Windows
Ivan Mitic created HADOOP-9678: -- Summary: TestRPC#testStopsAllThreads intermittently fails on Windows Key: HADOOP-9678 URL: https://issues.apache.org/jira/browse/HADOOP-9678 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0, 2.1.0-beta, 1.3.0 Reporter: Ivan Mitic Assignee: Ivan Mitic Exception:
{noformat}
junit.framework.AssertionFailedError: null
	at org.apache.hadoop.ipc.TestRPC.testStopsAllThreads(TestRPC.java:440)
{noformat}
[jira] [Created] (HADOOP-9677) TestSetupAndCleanupFailure#testWithDFS fails on Windows
Ivan Mitic created HADOOP-9677: -- Summary: TestSetupAndCleanupFailure#testWithDFS fails on Windows Key: HADOOP-9677 URL: https://issues.apache.org/jira/browse/HADOOP-9677 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Xi Fang Exception:
{noformat}
junit.framework.AssertionFailedError: expected:<2> but was:<3>
	at org.apache.hadoop.mapred.TestSetupAndCleanupFailure.testSetupAndCleanupKill(TestSetupAndCleanupFailure.java:219)
	at org.apache.hadoop.mapred.TestSetupAndCleanupFailure.testWithDFS(TestSetupAndCleanupFailure.java:282)
{noformat}
[jira] [Resolved] (HADOOP-9552) Windows log4j template should suppress info messages from mortbay.log
[ https://issues.apache.org/jira/browse/HADOOP-9552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Mitic resolved HADOOP-9552. Resolution: Fixed Fix Version/s: 1-win I committed this to branch-1-win. Thanks Mostafa for the review!
> Windows log4j template should suppress info messages from mortbay.log
> -
>
> Key: HADOOP-9552
> URL: https://issues.apache.org/jira/browse/HADOOP-9552
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 1-win
> Reporter: Ivan Mitic
> Assignee: Ivan Mitic
> Fix For: 1-win
>
> Attachments: HADOOP-9552.branch-1-win.patch
>
>
> Additional log messages on stdout:
> {noformat}
> C:\>c:\apps\dist\hadoop-1.1.0-SNAPSHOT\bin\hadoop dfs -ls /
> 13/04/05 20:32:24 INFO mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter (org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> Found 1 items
> drwxrwxrwx - SYSTEM supergroup 0 2013-04-03 00:35 /user
> {noformat}
[jira] [Resolved] (HADOOP-9579) Contrib ant test target not setting the java.library.path
[ https://issues.apache.org/jira/browse/HADOOP-9579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Mitic resolved HADOOP-9579. Resolution: Fixed Fix Version/s: 1-win I committed the patch to branch-1-win. Thanks Chris and Chuan for the review!
> Contrib ant test target not setting the java.library.path
> -
>
> Key: HADOOP-9579
> URL: https://issues.apache.org/jira/browse/HADOOP-9579
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 1-win
> Reporter: Ivan Mitic
> Assignee: Ivan Mitic
> Fix For: 1-win
>
> Attachments: HADOOP-9579.branch-1-win.2.patch,
> HADOOP-9579.branch-1-win.patch
>
>
> build-contrib.xml does not set java.library.path causing tests to run without
> the hadoop native library. This is a bigger problem on Windows as having the
> native library is required.
[jira] [Created] (HADOOP-9590) Move to JDK7 improved APIs for file operations when available
Ivan Mitic created HADOOP-9590: -- Summary: Move to JDK7 improved APIs for file operations when available Key: HADOOP-9590 URL: https://issues.apache.org/jira/browse/HADOOP-9590 Project: Hadoop Common Issue Type: Improvement Reporter: Ivan Mitic JDK6 does not have complete support for local file system file operations. Specifically:
- JDK6 does not provide symlink/hardlink APIs, which forced Hadoop to defer to shell-based tooling
- JDK6 does not return any useful error information when File#mkdir/mkdirs or File#renameTo fails, making it unnecessarily hard to troubleshoot some issues
- JDK6 File#canRead/canWrite/canExecute do not perform any access checks on Windows, making the APIs inconsistent with the Unix behavior
- JDK6 File#setReadable/setWritable/setExecutable do not change access rights on Windows, making the APIs inconsistent with the Unix behavior
- JDK6 File#length does not work as expected on symlinks on Windows
- JDK6 File#renameTo does not work as expected on symlinks on Windows
All of the above resulted in the Hadoop community having to fill in the gaps by providing equivalent native implementations or applying workarounds. JDK7 addressed (as far as I know) all (or most) of the above problems, either through the newly introduced [Files|http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html] class or through bug fixes. This is a tracking Jira to revisit the above mitigations once JDK7 becomes the platform supported by the Hadoop community. This work would allow a significant portion of the native platform-dependent code to be replaced with Java equivalents, which is a win for Hadoop's cross-platform support.
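The error-reporting gap in the second bullet can be shown with a minimal sketch contrasting the two APIs (illustrative only, not Hadoop code): File#mkdir reports failure as a bare boolean, while java.nio.file.Files#createDirectory throws an IOException that says why it failed.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FilesApiDemo {
    // Returns null on success, or the exception class name on failure.
    // Unlike File#mkdir (which only yields true/false), Files#createDirectory
    // reports the reason, e.g. FileAlreadyExistsException or AccessDeniedException.
    public static String mkdirError(Path dir) {
        try {
            Files.createDirectory(dir);
            return null; // created successfully
        } catch (IOException e) {
            return e.getClass().getSimpleName(); // actionable diagnostics
        }
    }
}
```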
[jira] [Created] (HADOOP-9579) Contrib ant test target not setting the java.library.path
Ivan Mitic created HADOOP-9579: -- Summary: Contrib ant test target not setting the java.library.path Key: HADOOP-9579 URL: https://issues.apache.org/jira/browse/HADOOP-9579 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic build-contrib.xml does not set java.library.path, causing tests to run without the hadoop native library. This is a bigger problem on Windows as having the native library is required.
[jira] [Created] (HADOOP-9552) Windows log4j template should suppress info messages from mortbay.log
Ivan Mitic created HADOOP-9552: -- Summary: Windows log4j template should suppress info messages from mortbay.log Key: HADOOP-9552 URL: https://issues.apache.org/jira/browse/HADOOP-9552 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic Additional log messages on stdout:
{noformat}
C:\>c:\apps\dist\hadoop-1.1.0-SNAPSHOT\bin\hadoop dfs -ls /
13/04/05 20:32:24 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter (org.mortbay.log) via org.mortbay.log.Slf4jLog
Found 1 items
drwxrwxrwx - SYSTEM supergroup 0 2013-04-03 00:35 /user
{noformat}
[jira] [Created] (HADOOP-9551) Backport common utils introduced with HADOOP-9413 to branch-1-win
Ivan Mitic created HADOOP-9551: -- Summary: Backport common utils introduced with HADOOP-9413 to branch-1-win Key: HADOOP-9551 URL: https://issues.apache.org/jira/browse/HADOOP-9551 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic Branch-1-win has the same set of problems described in HADOOP-9413. With this Jira I plan to prepare a branch-1-win compatible patch.
[jira] [Created] (HADOOP-9525) Add tests that validate winutils chmod behavior on folders
Ivan Mitic created HADOOP-9525: -- Summary: Add tests that validate winutils chmod behavior on folders Key: HADOOP-9525 URL: https://issues.apache.org/jira/browse/HADOOP-9525 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0 Reporter: Ivan Mitic Assignee: Ivan Mitic As part of HADOOP-9413 and HDFS-4610 I realized that we don't have tests that validate the behavior of winutils chmod on folders. It would be good to add additional tests to both validate the functionality and use them as a means to document some subtle differences in behavior between Unix and Windows chmod.
[jira] [Created] (HADOOP-9490) LocalFileSystem#reportChecksumFailure not closing the checksum file handle before rename
Ivan Mitic created HADOOP-9490: -- Summary: LocalFileSystem#reportChecksumFailure not closing the checksum file handle before rename Key: HADOOP-9490 URL: https://issues.apache.org/jira/browse/HADOOP-9490 Project: Hadoop Common Issue Type: Bug Reporter: Ivan Mitic Assignee: Ivan Mitic Fix For: 3.0.0 LocalFileSystem#reportChecksumFailure is not closing the open stream on the checksum file before it moves it to the bad_files folder, which causes the operation to fail on Windows. TestLocalFileSystem fails on Windows because of this:
{code}
testReportChecksumFailure(org.apache.hadoop.fs.TestLocalFileSystem) Time elapsed: 31 sec <<< FAILURE!
java.lang.AssertionError:
	at org.junit.Assert.fail(Assert.java:91)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.junit.Assert.assertTrue(Assert.java:54)
	at org.apache.hadoop.fs.TestLocalFileSystem.testReportChecksumFailure(TestLocalFileSystem.java:335)
{code}
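The fix pattern can be sketched with a self-contained example (illustrative only, using java.nio.file rather than the Hadoop FileSystem API): on Windows a file cannot be renamed while a handle to it is still open, so the stream must be closed, here via try-with-resources, before the move.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class CloseBeforeRenameDemo {
    // Writes data to src, closes the handle, then moves src to dst.
    // If the move happened inside the try block (handle still open),
    // it would fail on Windows with a sharing violation.
    public static void writeThenMove(Path src, Path dst, byte[] data) throws IOException {
        try (OutputStream out = Files.newOutputStream(src)) {
            out.write(data);
        } // handle closed here; the rename below is now safe on all platforms
        Files.move(src, dst, StandardCopyOption.REPLACE_EXISTING);
    }
}
```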
[jira] [Created] (HADOOP-9480) The windows installer should pick the config from src\packages\win\template\conf
Ivan Mitic created HADOOP-9480: -- Summary: The windows installer should pick the config from src\packages\win\template\conf Key: HADOOP-9480 URL: https://issues.apache.org/jira/browse/HADOOP-9480 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic We should pick the config files from the "src\packages\win\template\conf" location rather than the conf\ location in the Windows installer. Reported by [~mostafae].
[jira] [Created] (HADOOP-9472) Cleanup hadoop-config.cmd
Ivan Mitic created HADOOP-9472: -- Summary: Cleanup hadoop-config.cmd Key: HADOOP-9472 URL: https://issues.apache.org/jira/browse/HADOOP-9472 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic Priority: Minor Some portions of the hadoop-config.cmd script are unused and should be cleaned up.
[jira] [Created] (HADOOP-9463) branch-1-win fails to build with OpenJDK7
Ivan Mitic created HADOOP-9463: -- Summary: branch-1-win fails to build with OpenJDK7 Key: HADOOP-9463 URL: https://issues.apache.org/jira/browse/HADOOP-9463 Project: Hadoop Common Issue Type: Bug Reporter: Ivan Mitic Assignee: Ivan Mitic Fix For: 1-win Build fails with the following error:
I:\svn\trunk_rebase\hadoop-common [branch-1-win]> ant clean winpkg
Buildfile: I:\svn\trunk_rebase\hadoop-common\build.xml
BUILD FAILED
I:\svn\trunk_rebase\hadoop-common\build.xml:87: Unable to create javax script engine for javascript
[jira] [Created] (HADOOP-9452) Windows install scripts bugfixes
Ivan Mitic created HADOOP-9452: -- Summary: Windows install scripts bugfixes Key: HADOOP-9452 URL: https://issues.apache.org/jira/browse/HADOOP-9452 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic A few bugfixes we've done to install scripts on Windows.
[jira] [Created] (HADOOP-9413) Introduce common utils for File#setReadable/Writable/Executable and File#canRead/Write/Execute that work cross-platform
Ivan Mitic created HADOOP-9413: -- Summary: Introduce common utils for File#setReadable/Writable/Executable and File#canRead/Write/Execute that work cross-platform Key: HADOOP-9413 URL: https://issues.apache.org/jira/browse/HADOOP-9413 Project: Hadoop Common Issue Type: Bug Reporter: Ivan Mitic Assignee: Ivan Mitic Fix For: 3.0.0 So far, we've seen many unittest and product bugs in Hadoop on Windows because Java's APIs that manipulate permissions do not work as expected. We've addressed many of these problems on a one-by-one basis (by either changing the code a bit or disabling the test). While debugging the remaining unittest failures we continue to run into the same patterns of problems, and instead of addressing them one by one, I propose that we expose a set of equivalent wrapper APIs that will work well on all platforms. Scanning through the codebase, this will actually be a simple change as there are very few places that use File#setReadable/Writable/Executable and File#canRead/Write/Execute (5 files in Common, 9 files in HDFS). HADOOP-8973 contains additional context on the problem.
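A hedged illustration of the semantics such wrappers aim for (the wrappers HADOOP-9413 actually introduces are native, winutils-backed Hadoop utilities, not this JDK API): unlike JDK6's File#canRead/canWrite on Windows, java.nio.file.Files performs a real access check.

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class AccessCheckDemo {
    // A canRead/canWrite-style check with consistent cross-platform
    // semantics: Files.isReadable/isWritable consult actual access
    // rights, and return false for paths that do not exist.
    public static boolean canReadAndWrite(Path p) {
        return Files.isReadable(p) && Files.isWritable(p);
    }
}
```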
[jira] [Created] (HADOOP-9388) TestFsShellCopy fails on Windows
Ivan Mitic created HADOOP-9388: -- Summary: TestFsShellCopy fails on Windows Key: HADOOP-9388 URL: https://issues.apache.org/jira/browse/HADOOP-9388 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0 Reporter: Ivan Mitic Assignee: Ivan Mitic Test fails on below test cases: {code} Tests run: 11, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 4.343 sec <<< FAILURE! testMoveDirFromLocal(org.apache.hadoop.fs.TestFsShellCopy) Time elapsed: 29 sec <<< FAILURE! java.lang.AssertionError: expected:<0> but was:<1> at org.junit.Assert.fail(Assert.java:91) at org.junit.Assert.failNotEquals(Assert.java:645) at org.junit.Assert.assertEquals(Assert.java:126) at org.junit.Assert.assertEquals(Assert.java:470) at org.junit.Assert.assertEquals(Assert.java:454) at org.apache.hadoop.fs.TestFsShellCopy.testMoveDirFromLocal(TestFsShellCopy.java:392) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189) at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165) at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75) testMoveDirFromLocalDestExists(org.apache.hadoop.fs.TestFsShellCopy) Time elapsed: 25 sec <<< FAILURE! 
java.lang.AssertionError: expected:<0> but was:<1> at org.junit.Assert.fail(Assert.java:91) at org.junit.Assert.failNotEquals(Assert.java:645) at org.junit.Assert.assertEquals(Assert.java:126) at org.junit.Assert.assertEquals(Assert.java:470) at org.junit.Assert.assertEquals(Assert.java:454) at org.apache.hadoop.fs.TestFsShellCopy.testMoveDirFromLocalDestExists(TestFsShellCopy.java:410) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:
[jira] [Created] (HADOOP-9387) TestDFVariations fails on Windows after the merge
Ivan Mitic created HADOOP-9387: -- Summary: TestDFVariations fails on Windows after the merge Key: HADOOP-9387 URL: https://issues.apache.org/jira/browse/HADOOP-9387 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0 Reporter: Ivan Mitic Assignee: Ivan Mitic Test fails with the following errors: {code} Running org.apache.hadoop.fs.TestDFVariations Tests run: 4, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.186 sec <<< FAILURE! testOSParsing(org.apache.hadoop.fs.TestDFVariations) Time elapsed: 109 sec <<< ERROR! java.io.IOException: Fewer lines of output than expected at org.apache.hadoop.fs.DF.parseOutput(DF.java:203) at org.apache.hadoop.fs.DF.getMount(DF.java:150) at org.apache.hadoop.fs.TestDFVariations.testOSParsing(TestDFVariations.java:59) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28) testGetMountCurrentDirectory(org.apache.hadoop.fs.TestDFVariations) Time elapsed: 1 sec <<< ERROR! 
java.io.IOException: Fewer lines of output than expected at org.apache.hadoop.fs.DF.parseOutput(DF.java:203) at org.apache.hadoop.fs.DF.getMount(DF.java:150) at org.apache.hadoop.fs.TestDFVariations.testGetMountCurrentDirectory(TestDFVariations.java:139) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28) {code}
[jira] [Reopened] (HADOOP-9099) NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address
[ https://issues.apache.org/jira/browse/HADOOP-9099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Mitic reopened HADOOP-9099: Reopening for trunk. > NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an > IP address > --- > > Key: HADOOP-9099 > URL: https://issues.apache.org/jira/browse/HADOOP-9099 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 1-win >Reporter: Ivan Mitic >Assignee: Ivan Mitic >Priority: Minor > Fix For: 1.2.0, 1-win > > Attachments: HADOOP-9099.branch-1-win.patch, HADOOP-9099.trunk.patch > > > I just hit this failure. We should use some more unique string for > "UnknownHost": > Testcase: testNormalizeHostName took 0.007 sec > FAILED > expected:<[65.53.5.181]> but was:<[UnknownHost]> > junit.framework.AssertionFailedError: expected:<[65.53.5.181]> but > was:<[UnknownHost]> > at > org.apache.hadoop.net.TestNetUtils.testNormalizeHostName(TestNetUtils.java:347) > Will post a patch in a bit. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9376) TestProxyUserFromEnv fails on a Windows domain joined machine
Ivan Mitic created HADOOP-9376: -- Summary: TestProxyUserFromEnv fails on a Windows domain joined machine Key: HADOOP-9376 URL: https://issues.apache.org/jira/browse/HADOOP-9376 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0 Reporter: Ivan Mitic Assignee: Ivan Mitic TestProxyUserFromEnv#testProxyUserFromEnvironment fails with the following error on my machine: org.junit.ComparisonFailure: expected:<[redmond\]ivanmi> but was:<[]ivanmi> at org.junit.Assert.assertEquals(Assert.java:123) at org.junit.Assert.assertEquals(Assert.java:145) at org.apache.hadoop.security.TestProxyUserFromEnv.testProxyUserFromEnvironment(TestProxyUserFromEnv.java:45) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at 
org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9365) TestHAZKUtil fails on Windows
Ivan Mitic created HADOOP-9365: -- Summary: TestHAZKUtil fails on Windows Key: HADOOP-9365 URL: https://issues.apache.org/jira/browse/HADOOP-9365 Project: Hadoop Common Issue Type: Bug Affects Versions: trunk-win Reporter: Ivan Mitic Assignee: Ivan Mitic TestHAZKUtil#testConfIndirection fails on the following validation: assertTrue(fnfe.getMessage().startsWith(BOGUS_FILE)); because the path separators do not match: Expected: \-this-does-not-exist Actual: /-this-does-not-exist
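One separator-agnostic way to write such an assertion (an illustration, not the committed fix) is to normalize both sides to forward slashes before comparing:

```java
// Sketch: normalize path separators before asserting, so a check like the
// one in TestHAZKUtil#testConfIndirection passes on Windows and Unix alike.
public class PathSeparatorSketch {

    // Convert Windows-style backslashes to forward slashes.
    static String normalize(String path) {
        return path.replace('\\', '/');
    }

    public static void main(String[] args) {
        String bogusFile = "/-this-does-not-exist";        // what the test expects
        String windowsMessage = "\\-this-does-not-exist";  // what Windows produces
        System.out.println(normalize(windowsMessage).startsWith(normalize(bogusFile))); // prints: true
    }
}
```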
[jira] [Created] (HADOOP-9364) PathData#expandAsGlob does not return correct results for absolute paths on Windows
Ivan Mitic created HADOOP-9364: -- Summary: PathData#expandAsGlob does not return correct results for absolute paths on Windows Key: HADOOP-9364 URL: https://issues.apache.org/jira/browse/HADOOP-9364 Project: Hadoop Common Issue Type: Bug Affects Versions: trunk-win Reporter: Ivan Mitic Assignee: Ivan Mitic This causes {{FsShell ls}} not to work properly for absolute paths. For example: {code} -fs hdfs://127.0.0.1:58559 -ls -R /dir0 {code} returns {code} drwxr-xr-x - ivanmi supergroup 0 2013-03-05 11:15 ../../dir0/dir1 {code}
[jira] [Created] (HADOOP-9313) Remove spurious mkdir from hadoop-config.cmd
Ivan Mitic created HADOOP-9313: -- Summary: Remove spurious mkdir from hadoop-config.cmd Key: HADOOP-9313 URL: https://issues.apache.org/jira/browse/HADOOP-9313 Project: Hadoop Common Issue Type: Bug Reporter: Ivan Mitic The following mkdir seems to have been accidentally added to Windows cmd script and should be removed: {code} mkdir c:\tmp\dir1 {code}
[jira] [Created] (HADOOP-9250) Windows installer bugfixes
Ivan Mitic created HADOOP-9250: -- Summary: Windows installer bugfixes Key: HADOOP-9250 URL: https://issues.apache.org/jira/browse/HADOOP-9250 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic A few bugfixes and improvements we made to the install scripts on Windows.
[jira] [Resolved] (HADOOP-8516) fsck command does not work when executed on Windows Hadoop installation
[ https://issues.apache.org/jira/browse/HADOOP-8516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Mitic resolved HADOOP-8516. Resolution: Cannot Reproduce This is fixed in branch-1-win along the way. Resolving the issue as cannot reproduce to avoid accumulating Jiras. > fsck command does not work when executed on Windows Hadoop installation > --- > > Key: HADOOP-8516 > URL: https://issues.apache.org/jira/browse/HADOOP-8516 > Project: Hadoop Common > Issue Type: Bug >Reporter: Trupti Dhavle > > I tried to run the following command on a Windows Hadoop installation > hadoop fsck /tmp > This command was run as Administrator. > The command fails with the following error- > 12/06/20 00:24:55 ERROR security.UserGroupInformation: > PriviledgedActionException as:Administrator cause:java.net.ConnectException: Connection refused: > connect > Exception in thread "main" java.net.ConnectException: Connection refused: > connect > at java.net.PlainSocketImpl.socketConnect(Native Method) > at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351) > at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:211) > at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200) > at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) > at java.net.Socket.connect(Socket.java:529) > at java.net.Socket.connect(Socket.java:478) > at sun.net.NetworkClient.doConnect(NetworkClient.java:163) > at sun.net.www.http.HttpClient.openServer(HttpClient.java:394) > at sun.net.www.http.HttpClient.openServer(HttpClient.java:529) > at sun.net.www.http.HttpClient.<init>(HttpClient.java:233) > at sun.net.www.http.HttpClient.New(HttpClient.java:306) > at sun.net.www.http.HttpClient.New(HttpClient.java:323) > at > sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:970) > at > sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:911) > at >
sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:836) > at > sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1172) > at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:141) > at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:110) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:396) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1103) > at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:110) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79) > at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:182) > /tmp is owned by Administrator > hadoop fs -ls / > Found 3 items > drwxr-xr-x - Administrator supergroup 0 2012-06-08 15:08 > /benchmarks > drwxrwxrwx - Administrator supergroup 0 2012-06-11 23:00 /tmp > drwxr-xr-x - Administrator supergroup 0 2012-06-19 17:01 /user
[jira] [Resolved] (HADOOP-8517) --config option does not work with Hadoop installation on Windows
[ https://issues.apache.org/jira/browse/HADOOP-8517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Mitic resolved HADOOP-8517. Resolution: Cannot Reproduce This is fixed in branch-1-win along the way. Resolving the issue as cannot reproduce. > --config option does not work with Hadoop installation on Windows > - > > Key: HADOOP-8517 > URL: https://issues.apache.org/jira/browse/HADOOP-8517 > Project: Hadoop Common > Issue Type: Bug >Reporter: Trupti Dhavle > > I ran following command > hadoop --config c:\\hadoop\conf fs -ls / > I get following error for --config option > Unrecognized option: --config > Could not create the Java virtual machine. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (HADOOP-9074) Hadoop install scripts for Windows
[ https://issues.apache.org/jira/browse/HADOOP-9074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Mitic resolved HADOOP-9074. Resolution: Fixed This is committed to branch-1-win, resolving. > Hadoop install scripts for Windows > -- > > Key: HADOOP-9074 > URL: https://issues.apache.org/jira/browse/HADOOP-9074 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 1-win >Reporter: Ivan Mitic >Assignee: Ivan Mitic > Attachments: HADOOP-9074.branch-1-win.installer.patch > > > Tracking Jira to post Hadoop install scripts for Windows. Scripts will > provide means for Windows users/developers to install/uninstall Hadoop on a > single-node. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9177) Address issues that came out from running static code analysis on winutils
Ivan Mitic created HADOOP-9177: -- Summary: Address issues that came out from running static code analysis on winutils Key: HADOOP-9177 URL: https://issues.apache.org/jira/browse/HADOOP-9177 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic
[jira] [Created] (HADOOP-9167) TestBalancerWithNodeGroup fails on Windows
Ivan Mitic created HADOOP-9167: -- Summary: TestBalancerWithNodeGroup fails on Windows Key: HADOOP-9167 URL: https://issues.apache.org/jira/browse/HADOOP-9167 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic Test added with the support for "NetworkTopology with NodeGroup" fails on Windows. Relevant section from the test log: 2012-12-24 23:51:01,835 WARN datanode.DataNode (DataNode.java:makeInstance(1582)) - Invalid directory in dfs.data.dir: Incorrect permission for I:/git/project/hadoop-monarch/build/test/data/dfs/data/data1, expected: rwxr-xr-x, while actual: rwx-- 2012-12-24 23:51:01,911 WARN datanode.DataNode (DataNode.java:makeInstance(1582)) - Invalid directory in dfs.data.dir: Incorrect permission for I:/git/project/hadoop-monarch/build/test/data/dfs/data/data2, expected: rwxr-xr-x, while actual: rwx-- 2012-12-24 23:51:01,911 ERROR datanode.DataNode (DataNode.java:makeInstance(1588)) - All directories in dfs.data.dir are invalid. Default permissions on Windows are 700 while datanode expects 755. We already fixed this in MiniDFSCluster in branch-1-win. A similar fix is needed in MiniDFSClusterWithNodeGroup.
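The fix pattern described (force rwxr-xr-x on the data directories instead of relying on platform defaults) can be sketched with plain java.io.File calls. This is a hypothetical helper, not the actual MiniDFSClusterWithNodeGroup change:

```java
import java.io.File;

// Hypothetical sketch: explicitly set 755 (rwxr-xr-x) on a datanode data
// directory, since Windows defaults are effectively 700 and the datanode
// rejects the directory.
public class DataDirPermissions {

    static boolean makeDataDir(File dir) {
        if (!dir.exists() && !dir.mkdirs()) {
            return false;
        }
        boolean ok = true;
        // Remove write for everyone, then grant it back to the owner only.
        ok &= dir.setWritable(false, false);
        ok &= dir.setWritable(true, true);
        // Read and execute for everyone (the r-x part of 755).
        ok &= dir.setReadable(true, false);
        ok &= dir.setExecutable(true, false);
        return ok;
    }

    public static void main(String[] args) {
        File dir = new File(System.getProperty("java.io.tmpdir"), "dfs-data-sketch");
        System.out.println(makeDataDir(dir) && dir.canRead() && dir.canWrite());
        dir.delete();
    }
}
```

Note the second argument to `setWritable`/`setReadable`/`setExecutable`: `false` applies the change to everyone, `true` to the owner only.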
[jira] [Created] (HADOOP-9099) NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address
Ivan Mitic created HADOOP-9099: -- Summary: NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address Key: HADOOP-9099 URL: https://issues.apache.org/jira/browse/HADOOP-9099 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic I just hit this failure. We should use a more unique string than "UnknownHost": Testcase: testNormalizeHostName took 0.007 sec FAILED expected:<[65.53.5.181]> but was:<[UnknownHost]> junit.framework.AssertionFailedError: expected:<[65.53.5.181]> but was:<[UnknownHost]> at org.apache.hadoop.net.TestNetUtils.testNormalizeHostName(TestNetUtils.java:347) Will post a patch in a bit.
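The failure mode is easy to see with a simplified version of the method (a sketch; the real NetUtils code differs in detail). On networks whose DNS wildcards unknown names, "UnknownHost" can resolve to a real IP, so the test's expectation of a resolution failure breaks:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Simplified sketch of NetUtils.normalizeHostName: return the resolved IP
// if the name resolves, otherwise return the name unchanged. On domains
// with DNS wildcarding, even "UnknownHost" may resolve, which is why the
// test needs a name that can never resolve.
public class NormalizeHostNameSketch {

    static String normalizeHostName(String name) {
        try {
            return InetAddress.getByName(name).getHostAddress();
        } catch (UnknownHostException e) {
            return name;
        }
    }

    public static void main(String[] args) {
        // A literal IP never touches DNS and normalizes to itself.
        System.out.println(normalizeHostName("127.0.0.1")); // prints 127.0.0.1
        // The ".invalid" TLD is reserved (RFC 2606) and should never resolve,
        // making it a safer test input than "UnknownHost".
        System.out.println(normalizeHostName("some.name.invalid"));
    }
}
```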
[jira] [Created] (HADOOP-9096) Improve performance of Windows install scripts
Ivan Mitic created HADOOP-9096: -- Summary: Improve performance of Windows install scripts Key: HADOOP-9096 URL: https://issues.apache.org/jira/browse/HADOOP-9096 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Things to improve for better performance: the whole install takes 4-5 minutes on a single box because of: 1) Inclusion of src & other temp folders (IVY etc.) in the winpkg, so it takes longer to decompress the zip folder 2) Nested zip files in the winpkg. If we remove the nested zips then after a manual unpack everything becomes an xcopy install, which reduces install time.
[jira] [Created] (HADOOP-9074) Hadoop install scripts for Windows
Ivan Mitic created HADOOP-9074: -- Summary: Hadoop install scripts for Windows Key: HADOOP-9074 URL: https://issues.apache.org/jira/browse/HADOOP-9074 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic Tracking Jira to post Hadoop install scripts for Windows. Scripts will provide means for Windows users/developers to install/uninstall Hadoop on a single-node.
[jira] [Created] (HADOOP-9061) Java6+Windows does not work well with symlinks
Ivan Mitic created HADOOP-9061: -- Summary: Java6+Windows does not work well with symlinks Key: HADOOP-9061 URL: https://issues.apache.org/jira/browse/HADOOP-9061 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic We've seen multiple problems with file operations on symbolic links on Java 6 on Windows. Specifically: - File#length returns zero on symbolic links - File#renameTo renames the target, not the symlink Problematic scenarios are mainly related to symlinks created for the dist cache. Java 7 does not have the above problems.
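Under Java 7, java.nio.file distinguishes a link from its target, which avoids both problems listed above. A minimal sketch, assuming the process is allowed to create symlinks on its filesystem:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Java 7 sketch: Files.size follows the link to the target, and Files.move
// moves the link itself, avoiding the Java 6 File#length / File#renameTo
// behaviors described above.
public class SymlinkSketch {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("symlink-sketch");
        Path target = Files.write(dir.resolve("target.txt"), "hello".getBytes());
        Path link = dir.resolve("link.txt");
        try {
            Files.createSymbolicLink(link, target);
        } catch (UnsupportedOperationException | IOException e) {
            System.out.println("symlinks not supported here");
            return;
        }
        // Reports the target's 5 bytes (Java 6's File#length gave 0 for the link).
        System.out.println(Files.size(link));
        // Moves the link itself; the target stays in place
        // (Java 6's File#renameTo renamed the target instead).
        Files.move(link, dir.resolve("renamed-link.txt"));
        System.out.println(Files.exists(target)); // target untouched: true
    }
}
```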
[jira] [Created] (HADOOP-9057) TestMetricsSystemImpl.testInitFirst fails intermittently
Ivan Mitic created HADOOP-9057: -- Summary: TestMetricsSystemImpl.testInitFirst fails intermittently Key: HADOOP-9057 URL: https://issues.apache.org/jira/browse/HADOOP-9057 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic Error Message Wanted but not invoked: metricsSink.putMetrics(); -> at org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testInitFirst(TestMetricsSystemImpl.java:80) Actually, there were zero interactions with this mock. Stacktrace Wanted but not invoked: metricsSink.putMetrics(); -> at org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testInitFirst(TestMetricsSystemImpl.java:80) Actually, there were zero interactions with this mock. at org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testInitFirst(TestMetricsSystemImpl.java:80) at org.mockito.internal.runners.JUnit45AndHigherRunnerImpl.run(JUnit45AndHigherRunnerImpl.java:37) at org.mockito.runners.MockitoJUnitRunner.run(MockitoJUnitRunner.java:62) Standard Output 2012-10-04 11:43:55,641 INFO impl.MetricsConfig (MetricsConfig.java:loadFirst(99)) - loaded properties from hadoop-metrics2-test.properties -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9036) TestSinkQueue.testConcurrentConsumers fails
Ivan Mitic created HADOOP-9036: -- Summary: TestSinkQueue.testConcurrentConsumers fails Key: HADOOP-9036 URL: https://issues.apache.org/jira/browse/HADOOP-9036 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic org.apache.hadoop.metrics2.impl.TestSinkQueue.testConcurrentConsumers Error Message should've thrown Stacktrace junit.framework.AssertionFailedError: should've thrown at org.apache.hadoop.metrics2.impl.TestSinkQueue.shouldThrowCME(TestSinkQueue.java:229) at org.apache.hadoop.metrics2.impl.TestSinkQueue.testConcurrentConsumers(TestSinkQueue.java:195) Standard Output 2012-10-03 16:51:31,694 INFO impl.TestSinkQueue (TestSinkQueue.java:consume(243)) - sleeping -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9027) Build fails on Windows without sh/sed/echo in the path
Ivan Mitic created HADOOP-9027: -- Summary: Build fails on Windows without sh/sed/echo in the path Key: HADOOP-9027 URL: https://issues.apache.org/jira/browse/HADOOP-9027 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic Branch-1-win still has a dependency on a few unix tools at compile time. Tracking Jira to remove this dependency.
[jira] [Created] (HADOOP-9026) Hadoop.cmd fails to initialize if user's %path% variable has parenthesis
Ivan Mitic created HADOOP-9026: -- Summary: Hadoop.cmd fails to initialize if user's %path% variable has parenthesis Key: HADOOP-9026 URL: https://issues.apache.org/jira/browse/HADOOP-9026 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic Hadoop.cmd fails to initialize if the user's %path% variable contains parentheses. This happens in the "updatepath" module while trying to add Hadoop_bin_path to %path% Repro: 1. Create a folder C:\random() 2. Add this path to %path% 3. Start Hadoop command line Error: ""; unexpected at this time Reported by Ramya Nimmagadda.
[jira] [Reopened] (HADOOP-8972) Move winutils tests from bat to Java
[ https://issues.apache.org/jira/browse/HADOOP-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Mitic reopened HADOOP-8972: I forgot to delete some files in the original patch; reactivating. > Move winutils tests from bat to Java > > > Key: HADOOP-8972 > URL: https://issues.apache.org/jira/browse/HADOOP-8972 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 1-win >Reporter: Ivan Mitic >Assignee: Ivan Mitic > Fix For: 1-win > > Attachments: HADOOP-8972.branch-1-win.winutilstests.patch > > > This Jira tracks the work needed to move existing winutils tests from bat > files to Java. We decided to go with Java for the following reasons: > 1. It turned out to be quite hard to modify bat scripts and add new test cases > (people are generally not comfortable writing bat scripts) > 2. Debugging a test case failure in a bat script is not trivial > 3. It turned out that we are not running the test scripts frequently > Cons: > One now needs a JDK and the Hadoop jar to compile and run winutils tests. However, > in the context of Hadoop this is not an issue so we decided to go with > something we all feel more comfortable with.
[jira] [Created] (HADOOP-9008) Building hadoop tarball fails on Windows
Ivan Mitic created HADOOP-9008: -- Summary: Building hadoop tarball fails on Windows Key: HADOOP-9008 URL: https://issues.apache.org/jira/browse/HADOOP-9008 Project: Hadoop Common Issue Type: Bug Affects Versions: trunk-win Reporter: Ivan Mitic Trying to build the Hadoop trunk tarball via {{mvn package -Pdist -DskipTests -Dtar}} fails on Windows. The build system generates sh scripts that execute build tasks, which does not work on Windows without Cygwin. It might make sense to apply the same pattern as in HADOOP-8924 and use python instead of sh.
[jira] [Created] (HADOOP-9005) Merge hadoop cmd line scripts from branch-1-win
Ivan Mitic created HADOOP-9005: -- Summary: Merge hadoop cmd line scripts from branch-1-win Key: HADOOP-9005 URL: https://issues.apache.org/jira/browse/HADOOP-9005 Project: Hadoop Common Issue Type: Bug Affects Versions: trunk-win Reporter: Ivan Mitic Assignee: Ivan Mitic Tracking Jira for merging hadoop cmd line scripts from branch-1-win to trunk. Scripts also have to be updated to reflect their unix equivalents.
[jira] [Created] (HADOOP-8972) Move winutils tests from bat to Java
Ivan Mitic created HADOOP-8972: -- Summary: Move winutils tests from bat to Java Key: HADOOP-8972 URL: https://issues.apache.org/jira/browse/HADOOP-8972 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic This Jira tracks the work needed to move existing winutils tests from bat files to Java. We decided to go with Java for the following reasons: 1. It proved quite hard to modify bat scripts and add new test cases (people are generally not comfortable writing bat scripts) 2. Debugging a test case failure in a bat script is not trivial 3. It turned out that we are not running the test scripts frequently Cons: One now needs a JDK and the Hadoop jar to compile and run winutils tests. However, in the context of Hadoop this is not an issue, so we decided to go with something we all feel more comfortable with.
[jira] [Resolved] (HADOOP-8540) Compression with non-default codec's fail on Windows
[ https://issues.apache.org/jira/browse/HADOOP-8540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Mitic resolved HADOOP-8540. Resolution: Fixed Support for the zlib compression library was added with HADOOP-8564. To enable zlib compression codecs on Windows: - Download the zlib sources and set ZLIB_HOME in the environment before building Hadoop on Windows (this step is needed for the build to generate the corresponding stubs in hadoop.dll) - Install zlib1.dll to system32 (with HADOOP-8907 you can also install it next to hadoop.dll) > Compression with non-default codec's fail on Windows > > > Key: HADOOP-8540 > URL: https://issues.apache.org/jira/browse/HADOOP-8540 > Project: Hadoop Common > Issue Type: Bug > Reporter: Trupti Dhavle > > I was trying to run the compression test case with GzipCodec. However, the test fails with an NPE. It runs fine with DefaultCodec. > With text input: > c:\Workspace\winhadoopqe>c:\hdp\branch-1-win\bin\hadoop jar c:\hdp\branch-1-win\build\hadoop-examples-1.1.0-SNAPSHOT.jar sort "-Dmapred.compress.map.output=true" "-Dmapred.map.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec" "-Dmapred.output.compress=true" "-Dmapred.output.compression.type=NONE" "-Dmapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec" -outKey org.apache.hadoop.io.Text -outValue org.apache.hadoop.io.Text Compression/textinput Compression/textoutput-123 > Running on 1 nodes to sort from hdfs://localhost:8020/user/Administrator/Compression/textinput into hdfs://localhost:8020/user/Administrator/Compression/textoutput-123 with 1 reduces.
> Job started: Thu Jun 28 10:02:18 PDT 2012 > 12/06/28 10:02:18 INFO mapred.FileInputFormat: Total input paths to process : 1 > 12/06/28 10:02:18 INFO mapred.JobClient: Running job: job_201206271409_0045 > 12/06/28 10:02:19 INFO mapred.JobClient: map 0% reduce 0% > 12/06/28 10:02:36 INFO mapred.JobClient: Task Id : attempt_201206271409_0045_m_00_0, Status : FAILED > java.lang.NullPointerException > at org.apache.hadoop.mapred.IFile$Writer.<init>(IFile.java:102) > at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1407) > at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1298) > at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:437) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372) > at org.apache.hadoop.mapred.Child$4.run(Child.java:271) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:396) > at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1103) > at org.apache.hadoop.mapred.Child.main(Child.java:265)
[jira] [Reopened] (HADOOP-6496) HttpServer sends wrong content-type for CSS files (and others)
[ https://issues.apache.org/jira/browse/HADOOP-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Mitic reopened HADOOP-6496: Reopening for the branch 1.1 backport. > HttpServer sends wrong content-type for CSS files (and others) > -- > > Key: HADOOP-6496 > URL: https://issues.apache.org/jira/browse/HADOOP-6496 > Project: Hadoop Common > Issue Type: Bug > Affects Versions: 0.21.0, 0.22.0 > Reporter: Lars Francke > Assignee: Ivan Mitic > Priority: Minor > Fix For: 0.22.0 > > Attachments: HADOOP-6496.branch-1.1.backport.2.patch, > HADOOP-6496.branch-1.1.backport.patch, hadoop-6496.txt, hadoop-6496.txt > > > CSS files are sent as text/html, causing problems if the HTML page is rendered > in standards mode. The HDFS interface, for example, still works because it is > rendered in quirks mode; the HBase interface doesn't work because it is > rendered in standards mode. See HBASE-2110 for more details. > I've had a quick look at HttpServer but I'm too unfamiliar with it to see the > problem. I think this started happening with HADOOP-6441, which would lead me > to believe that the filter is called for every request and not only *.jsp and > *.html. I'd consider this a bug but I don't know enough about this to provide > a fix.
[jira] [Created] (HADOOP-8907) Provide means to look for zlib1.dll next to hadoop.dll on Windows
Ivan Mitic created HADOOP-8907: -- Summary: Provide means to look for zlib1.dll next to hadoop.dll on Windows Key: HADOOP-8907 URL: https://issues.apache.org/jira/browse/HADOOP-8907 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic This change is dependent on HADOOP-8564. Instead of asking end users to install zlib1.dll to system32 on Windows, it would be more convenient to just copy the dll next to hadoop.dll. More context and a patch coming after HADOOP-8564.
[jira] [Created] (HADOOP-8872) FileSystem#length returns zero for symlinks on windows+java6
Ivan Mitic created HADOOP-8872: -- Summary: FileSystem#length returns zero for symlinks on windows+java6 Key: HADOOP-8872 URL: https://issues.apache.org/jira/browse/HADOOP-8872 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic RawLocalFileSystem does not work well with symbolic links on Windows. Specifically, calling FileSystem#length on a path that is a symlink will return zero. This causes problems in objects that use LocalFileSystem to access local files; one example is SequenceFile. The issue is caused by Java 6's File#length returning zero for symbolic links on Windows. On Java 7 we will no longer have this problem.
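One workaround pattern for the Java 6 behavior described above is to resolve the link before asking for the length, since File#length on the symlink entry itself is what returns zero. This is a hedged sketch, not the actual Hadoop fix; the helper name is illustrative.

```java
import java.io.File;
import java.io.IOException;

public class SymlinkLength {
    // Resolve symbolic links via getCanonicalFile, then take the length of
    // the real target rather than the link entry. On Java 6 + Windows this
    // avoids File#length reporting 0 for the symlink itself.
    static long lengthFollowingLinks(File f) throws IOException {
        return f.getCanonicalFile().length();
    }

    public static void main(String[] args) throws IOException {
        File f = new File(args.length > 0 ? args[0] : ".");
        System.out.println(lengthFollowingLinks(f));
    }
}
```

For a regular (non-link) file the helper behaves exactly like File#length, so it is safe to call unconditionally.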
[jira] [Created] (HADOOP-8869) Links at the bottom of the jobdetails page do not render correctly in IE9
Ivan Mitic created HADOOP-8869: -- Summary: Links at the bottom of the jobdetails page do not render correctly in IE9 Key: HADOOP-8869 URL: https://issues.apache.org/jira/browse/HADOOP-8869 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic Attachments: IE9.png, OtherBrowsers.png See the attached screenshots IE9.png vs OtherBrowsers.png.
[jira] [Created] (HADOOP-8868) FileUtil#symlink and FileUtil#chmod should normalize the path before calling into shell APIs
Ivan Mitic created HADOOP-8868: -- Summary: FileUtil#symlink and FileUtil#chmod should normalize the path before calling into shell APIs Key: HADOOP-8868 URL: https://issues.apache.org/jira/browse/HADOOP-8868 Project: Hadoop Common Issue Type: Bug Affects Versions: 1-win Reporter: Ivan Mitic Assignee: Ivan Mitic We have seen cases where paths passed in from FileUtil#symlink or FileUtil#chmod to Shell APIs can contain both forward and backward slashes on Windows. This causes problems, since some Windows APIs do not work well with mixed slashes.
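A minimal sketch of the normalization the issue asks for: round-trip the string through java.io.File, whose constructor rewrites the alternate separator to the platform one and collapses duplicate separators, before handing the path to a shell command. The helper name is hypothetical, not the signature of the actual patch.

```java
import java.io.File;

public class PathNormalize {
    // new File(path).getPath() converts '/' to '\' on Windows (and collapses
    // duplicate separators everywhere), so a mixed-slash path like
    // "C:/foo\bar" comes out with a single consistent separator style
    // before it reaches a shell API.
    static String normalizeForShell(String path) {
        return new File(path).getPath();
    }

    public static void main(String[] args) {
        System.out.println(normalizeForShell(args.length > 0 ? args[0] : "a//b"));
    }
}
```

The callers in FileUtil#symlink and FileUtil#chmod would apply this just before building the shell command line.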
[jira] [Resolved] (HADOOP-8487) Many HDFS tests use a test path intended for local file system tests
[ https://issues.apache.org/jira/browse/HADOOP-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Mitic resolved HADOOP-8487. Resolution: Fixed Resolving as this is committed to branch-1-win. This way, the state of active Jiras is up to date under HADOOP-8645. Will reference this Jira once we get to fixing this in trunk. > Many HDFS tests use a test path intended for local file system tests > > > Key: HADOOP-8487 > URL: https://issues.apache.org/jira/browse/HADOOP-8487 > Project: Hadoop Common > Issue Type: Bug > Components: test > Reporter: Ivan Mitic > Assignee: Ivan Mitic > Attachments: HADOOP-8487-branch-1-win(2).patch, > HADOOP-8487-branch-1-win(3).patch, HADOOP-8487-branch-1-win(3).update.patch, > HADOOP-8487-branch-1-win.alternate.patch, HADOOP-8487-branch-1-win.patch > > > Many tests use a test path intended for local tests set up by the build > environment. In some cases the tests fail on platforms such as Windows > because the path contains a drive specifier such as c:
[jira] [Created] (HADOOP-8734) LocalJobRunner does not support private distributed cache
Ivan Mitic created HADOOP-8734: -- Summary: LocalJobRunner does not support private distributed cache Key: HADOOP-8734 URL: https://issues.apache.org/jira/browse/HADOOP-8734 Project: Hadoop Common Issue Type: Bug Components: filecache Reporter: Ivan Mitic Assignee: Ivan Mitic It seems that LocalJobRunner does not support private distributed cache. The issue is more visible on Windows as all DC files are private by default (see HADOOP-8731).
[jira] [Created] (HADOOP-8733) TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail on Windows
Ivan Mitic created HADOOP-8733: -- Summary: TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail on Windows Key: HADOOP-8733 URL: https://issues.apache.org/jira/browse/HADOOP-8733 Project: Hadoop Common Issue Type: Bug Components: test Reporter: Ivan Mitic Assignee: Ivan Mitic Jira tracking test failures related to test .sh script dependencies.
[jira] [Created] (HADOOP-8732) Address intermittent test failures on Windows
Ivan Mitic created HADOOP-8732: -- Summary: Address intermittent test failures on Windows Key: HADOOP-8732 URL: https://issues.apache.org/jira/browse/HADOOP-8732 Project: Hadoop Common Issue Type: Bug Components: util Reporter: Ivan Mitic Assignee: Ivan Mitic There are a few tests that fail intermittently on Windows with a timeout error. This means that the test was actually killed from the outside and would otherwise continue to run. The following are examples of such tests (there might be others): - TestJobInProgress (this issue repros pretty consistently in Eclipse) - TestControlledMapReduceJob - TestServiceLevelAuthorization
[jira] [Created] (HADOOP-8731) Public distributed cache support for Windows
Ivan Mitic created HADOOP-8731: -- Summary: Public distributed cache support for Windows Key: HADOOP-8731 URL: https://issues.apache.org/jira/browse/HADOOP-8731 Project: Hadoop Common Issue Type: Bug Components: filecache Reporter: Ivan Mitic Assignee: Ivan Mitic A distributed cache file is considered public (sharable between MR jobs) if OTHER has read permission on the file and +x permission all the way up the folder hierarchy. By default, Windows permissions are mapped to "700" all the way up to the drive letter, and it is unreasonable to ask users to change the permissions on the whole drive to make a file public. In other words, it is hardly possible to have a public distributed cache on Windows. To enable the scenario and make it more "Windows friendly", the criteria for when a file is considered public should be relaxed. One proposal is to check only whether the user has given the EVERYONE group permission on the file (and discard the +x check on parent folders). Security considerations for the proposal: Default permissions on Unix platforms are usually "775" or "755", meaning that OTHER users can read and list folders by default. What this also means is that Hadoop users have to explicitly make files private in order to make them private in the cluster (please correct me if this is not the case in real life!). On Windows, default permissions are "700". This means that by default all files are private. In the new model, if users want to make them public, they have to explicitly add EVERYONE group permission on the file. TestTrackerDistributedCacheManager fails because of this issue.
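The two criteria discussed above can be sketched side by side using plain POSIX-style permission bits rather than Hadoop's FileStatus/FsPermission classes; the map-based model and method names here are purely illustrative. With the default Windows mapping (file readable by OTHER but every ancestor "700"), the strict check fails while the relaxed one succeeds.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PublicCacheCheck {
    // Original criterion: OTHER can read the file AND has execute (traverse)
    // permission on every ancestor directory.
    static boolean isPublicStrict(Map<String, Integer> mode, String file, List<String> ancestors) {
        if ((mode.get(file) & 04) == 0) return false;       // other-read on the file
        for (String dir : ancestors)
            if ((mode.get(dir) & 01) == 0) return false;    // other-execute on each ancestor
        return true;
    }

    // Relaxed proposal: only the file's own other-read bit matters.
    static boolean isPublicRelaxed(Map<String, Integer> mode, String file) {
        return (mode.get(file) & 04) != 0;
    }

    public static void main(String[] args) {
        Map<String, Integer> mode = new HashMap<String, Integer>();
        mode.put("/c/users/bob/cache/file.jar", 0644);      // EVERYONE can read the file
        mode.put("/c", 0700);                               // but ancestors map to 700
        mode.put("/c/users", 0700);
        mode.put("/c/users/bob", 0700);
        mode.put("/c/users/bob/cache", 0700);
        List<String> ancestors = Arrays.asList("/c", "/c/users", "/c/users/bob", "/c/users/bob/cache");
        System.out.println(isPublicStrict(mode, "/c/users/bob/cache/file.jar", ancestors));
        System.out.println(isPublicRelaxed(mode, "/c/users/bob/cache/file.jar"));
    }
}
```

The difference in the two booleans is exactly the Windows problem the issue describes: under the strict rule the "700" ancestors veto publicness even though the file itself is readable.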
[jira] [Created] (HADOOP-8534) TestQueueManagerForJobKillAndJobPriority and TestQueueManagerForJobKillAndNonDefaultQueue fail on Windows
Ivan Mitic created HADOOP-8534: -- Summary: TestQueueManagerForJobKillAndJobPriority and TestQueueManagerForJobKillAndNonDefaultQueue fail on Windows Key: HADOOP-8534 URL: https://issues.apache.org/jira/browse/HADOOP-8534 Project: Hadoop Common Issue Type: Bug Components: conf Affects Versions: 1.0.0 Reporter: Ivan Mitic The Java XML parser keeps the file locked after a SAXException, causing the following tests to fail: - TestQueueManagerForJobKillAndJobPriority - TestQueueManagerForJobKillAndNonDefaultQueue {{TestQueueManagerForJobKillAndJobPriority#testQueueAclRefreshWithInvalidConfFile()}} creates a temp config file with incorrect syntax. Later, the test tries to delete/clean up this file and the operation fails on Windows (as the file is still open). From this point on, all subsequent tests fail because they try to use the incorrect config file. Forum references on the problem and the fix: http://www.linuxquestions.org/questions/programming-9/java-xml-parser-keeps-file-locked-after-saxexception-768613/ https://forums.oracle.com/forums/thread.jspa?threadID=2046505&start=0&tstart=0
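The fix pattern referenced in those forum threads is to parse from an InputStream the caller owns and always close it in a finally block, so a SAXException cannot leave the file handle open (which on Windows blocks the later delete). This is a generic sketch of that pattern, not the actual Hadoop patch; the class and method names are illustrative.

```java
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import javax.xml.parsers.DocumentBuilderFactory;

public class SafeXmlParse {
    // Parse an XML file, returning false on malformed input. Because the
    // stream is opened and closed here (not by the parser from a File or
    // path), a SAXException cannot leak the file handle.
    static boolean parseQuietly(File f) {
        InputStream in = null;
        try {
            in = new BufferedInputStream(new FileInputStream(f));
            DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(in);
            return true;
        } catch (Exception e) {   // SAXException, IOException, ParserConfigurationException
            return false;
        } finally {
            if (in != null) try { in.close(); } catch (IOException ignored) {}
        }
    }

    public static void main(String[] args) {
        System.out.println(parseQuietly(new File(args[0])));
    }
}
```

After a failed parse the temp file can be deleted immediately, which is exactly the cleanup step that was failing in the tests.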
[jira] [Created] (HADOOP-8493) Extend Path with Path#toFile() and Path(File) to better support path cross-platform differences
Ivan Mitic created HADOOP-8493: -- Summary: Extend Path with Path#toFile() and Path(File) to better support path cross-platform differences Key: HADOOP-8493 URL: https://issues.apache.org/jira/browse/HADOOP-8493 Project: Hadoop Common Issue Type: Bug Components: fs Reporter: Ivan Mitic Assignee: Ivan Mitic Priority: Minor In Hadoop, the Path object is used to represent both local and remote (for example DFS) paths. In some scenarios, Path's path is passed directly to the operating system shell. As seen in [MAPREDUCE-4321|https://issues.apache.org/jira/browse/MAPREDUCE-4321], the path returned from the Path object is not necessarily a valid shell path. By providing {{Path#toFile()}} and {{Path(File)}}, we will provide means by which people can do the right thing if they follow certain rules. However, as noted in the comments for MAPREDUCE-4321, this does not provide any guarantees that someone won't unknowingly do it wrong in the future, but it is still a step forward.
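A hedged sketch of what the proposed pair of conversions might look like: these methods did not exist at the time of the issue, and the bodies below are one plausible shape (round-tripping through a file URI so drive letters and separators are encoded consistently), not the actual patch. The class name LocalPath is a stand-in to avoid implying this is Hadoop's Path source.

```java
import java.io.File;
import java.net.URI;

public class LocalPath {
    private final URI uri;

    // Proposed Path(File)-style constructor: build from the File's URI so
    // Windows drive letters and backslashes are encoded in one canonical way.
    LocalPath(File file) {
        this.uri = file.toURI();
    }

    // Proposed Path#toFile()-style accessor: recover a File that is safe to
    // hand to shell APIs, instead of passing the raw URI path string.
    File toFile() {
        return new File(uri);
    }

    public static void main(String[] args) {
        File f = new File(args.length > 0 ? args[0] : ".");
        System.out.println(new LocalPath(f).toFile());
    }
}
```

The point of the pair is that File-to-Path-to-File is lossless, so callers who stay on this round trip never hand a URI-escaped string to the shell.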
[jira] [Created] (HADOOP-8487) Address test failures related to Windows paths not being valid DFS paths
Ivan Mitic created HADOOP-8487: -- Summary: Address test failures related to Windows paths not being valid DFS paths Key: HADOOP-8487 URL: https://issues.apache.org/jira/browse/HADOOP-8487 Project: Hadoop Common Issue Type: Bug Components: test Reporter: Ivan Mitic There are a number of tests that fail on Windows because Hadoop's distributed file system does not allow the colon character in DFS paths. Specifically, passing the following path to DFS: {code}/c:/some/path{code} would fail the DFSUtil#isValidName check and cause the current operation to fail. Any test that uses a local absolute path in the context of DFS will fail because of this.
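The kind of validation described above can be sketched as follows: a DFS name must be absolute and no component may contain a colon (or be a relative "." / ".." component). This mirrors the behavior attributed to DFSUtil#isValidName, not its exact source.

```java
public class DfsNameCheck {
    // Accept only absolute, slash-separated names whose components contain
    // no ':' and are not "." or "..". A Windows absolute path embedded in a
    // DFS name, e.g. "/c:/some/path", fails on the "c:" component.
    static boolean isValidName(String src) {
        if (!src.startsWith("/")) return false;
        for (String component : src.split("/")) {
            if (component.contains(":")) return false;
            if (component.equals(".") || component.equals("..")) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // The failing Windows case from the issue; prints false.
        System.out.println(isValidName("/c:/some/path"));
    }
}
```

Any test that derives its DFS path from a property like "test.build.data" picks up exactly such a drive-letter component on Windows, which is why so many tests trip over this one check.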
[jira] [Created] (HADOOP-8440) HarFileSystem.decodeHarURI fails for URIs whose host contains numbers
Ivan Mitic created HADOOP-8440: -- Summary: HarFileSystem.decodeHarURI fails for URIs whose host contains numbers Key: HADOOP-8440 URL: https://issues.apache.org/jira/browse/HADOOP-8440 Project: Hadoop Common Issue Type: Bug Components: fs Affects Versions: 1.0.0 Reporter: Ivan Mitic Assignee: Ivan Mitic Priority: Minor For example, HarFileSystem.decodeHarURI will fail for the following URI: har://hdfs-127.0.0.1:51040/user
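A HAR authority encodes the underlying filesystem as "<scheme>-<host>:<port>", e.g. "hdfs-127.0.0.1:51040". A hedged sketch of decoding it: java.net.URI may treat such an authority as registry-based (getHost() returns null, getPort() returns -1), so splitting the raw getAuthority() string on the first '-' sidesteps that. This is one plausible approach, not the actual HarFileSystem source.

```java
import java.net.URI;

public class HarAuthority {
    // Split "hdfs-127.0.0.1:51040" into the underlying scheme and the
    // underlying authority, using getAuthority() rather than getHost(),
    // which can be null for hostnames like "hdfs-127.0.0.1".
    static String[] decode(URI harUri) {
        String auth = harUri.getAuthority();
        int i = auth == null ? -1 : auth.indexOf('-');
        if (i < 0) throw new IllegalArgumentException("no underlying scheme in: " + auth);
        return new String[] { auth.substring(0, i), auth.substring(i + 1) };
    }

    public static void main(String[] args) throws Exception {
        String[] parts = decode(new URI("har://hdfs-127.0.0.1:51040/user"));
        System.out.println(parts[0] + " " + parts[1]);
    }
}
```

Anything downstream can then re-parse "127.0.0.1:51040" as a normal host:port pair for the underlying hdfs:// URI.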
[jira] [Resolved] (HADOOP-8412) TestModTime, TestDelegationToken and TestAuthenticationToken fail intermittently on Windows
[ https://issues.apache.org/jira/browse/HADOOP-8412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Mitic resolved HADOOP-8412. Resolution: Fixed > TestModTime, TestDelegationToken and TestAuthenticationToken fail > intermittently on Windows > --- > > Key: HADOOP-8412 > URL: https://issues.apache.org/jira/browse/HADOOP-8412 > Project: Hadoop Common > Issue Type: Bug > Components: test > Affects Versions: 1.0.0 > Reporter: Ivan Mitic > Assignee: Ivan Mitic > Attachments: HADOOP-8412-branch-1-win.patch > > > Jira tracking failures from the summary.
[jira] [Created] (HADOOP-8414) Address problems related to localhost resolving to 127.0.0.1 on Windows
Ivan Mitic created HADOOP-8414: -- Summary: Address problems related to localhost resolving to 127.0.0.1 on Windows Key: HADOOP-8414 URL: https://issues.apache.org/jira/browse/HADOOP-8414 Project: Hadoop Common Issue Type: Bug Components: fs, test Affects Versions: 1.0.0 Reporter: Ivan Mitic Localhost resolves to 127.0.0.1 on Windows and that causes the following tests to fail: - TestHarFileSystem - TestCLI - TestSaslRPC This Jira tracks fixing these tests and other possible places that have similar issue.
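A small probe illustrating the behavior behind the issue: when "localhost" resolves to the loopback address 127.0.0.1, any test that compares a resolved address (or a canonical hostname derived from it) against the literal string "localhost" can mismatch. This is a diagnostic sketch, not part of the fix.

```java
import java.net.InetAddress;

public class LocalhostProbe {
    public static void main(String[] args) throws Exception {
        // Resolve the name the same way the failing tests implicitly do.
        InetAddress a = InetAddress.getByName("localhost");
        System.out.println(a.getHostAddress());        // typically 127.0.0.1 (or ::1)
        System.out.println(a.isLoopbackAddress());
        // Reverse-resolving the loopback address may yield a name other than
        // "localhost", which is the string-comparison trap in the tests.
        System.out.println(a.getCanonicalHostName());
    }
}
```

The robust comparison is address-based (isLoopbackAddress or InetAddress equality) rather than string-based.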
[jira] [Created] (HADOOP-8412) TestModTime, TestDelegationToken and TestAuthenticationToken fail intermittently on Windows
Ivan Mitic created HADOOP-8412: -- Summary: TestModTime, TestDelegationToken and TestAuthenticationToken fail intermittently on Windows Key: HADOOP-8412 URL: https://issues.apache.org/jira/browse/HADOOP-8412 Project: Hadoop Common Issue Type: Bug Components: record Affects Versions: 1.0.0 Reporter: Ivan Mitic Jira tracking failures from the summary.
[jira] [Created] (HADOOP-8411) TestStorageDirectoryFailure, TestTaskLogsTruncater, TestWebHdfsUrl and TestSecurityUtil fail on Windows
Ivan Mitic created HADOOP-8411: -- Summary: TestStorageDirectoryFailure, TestTaskLogsTruncater, TestWebHdfsUrl and TestSecurityUtil fail on Windows Key: HADOOP-8411 URL: https://issues.apache.org/jira/browse/HADOOP-8411 Project: Hadoop Common Issue Type: Bug Components: util Affects Versions: 1.1.0 Reporter: Ivan Mitic Jira tracking failures from the summary.
[jira] [Created] (HADOOP-8409) Address Hadoop path related issues on Windows
Ivan Mitic created HADOOP-8409: -- Summary: Address Hadoop path related issues on Windows Key: HADOOP-8409 URL: https://issues.apache.org/jira/browse/HADOOP-8409 Project: Hadoop Common Issue Type: Bug Components: fs, test, util Affects Versions: 1.0.0 Reporter: Ivan Mitic There are multiple places in prod and test code where Windows paths are not handled properly. From a high level this can be summarized as: 1. Windows paths are not necessarily valid DFS paths (while Unix paths are) 2. Windows paths are not necessarily valid URIs (while Unix paths are) #1 causes a number of tests to fail because they implicitly assume that local paths are valid DFS paths (by extracting the DFS test path from, for example, the "test.build.data" property) #2 causes issues when URIs are created directly from path strings passed in by the user