[jira] [Resolved] (HADOOP-10869) JavaKeyStoreProvider backing jceks file may get corrupted

2014-07-21 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur resolved HADOOP-10869.
-

Resolution: Duplicate

> JavaKeyStoreProvider backing jceks file may get corrupted
> -
>
> Key: HADOOP-10869
> URL: https://issues.apache.org/jira/browse/HADOOP-10869
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Arun Suresh
>
> Currently, flush writes to the same jceks file; if there is a failure during a 
> write, the jceks file will be rendered unusable, losing access to all keys 
> stored in it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Why Hadoop-trunk-commit always fails?

2014-07-21 Thread Konstantin Boudnik
If we use maven jar plugin or maven archivers to create any of these then 
adding 
  false
should solve the issue.

Cos

On Mon, Jul 21, 2014 at 01:55PM, Andrew Wang wrote:
> I dug around a bit with Tucu, and I think it's essentially the dependency
> analyzer screwing up with snapshot artifacts. I found a different error for
> HttpFS that looks similar:
> 
> 
> [WARNING]
> Dependency convergence error for
> org.apache.hadoop:hadoop-hdfs:3.0.0-SNAPSHOT paths to dependency are:
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-hdfs:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-hdfs:3.0.0-20140718.221409-4777
> 
> [WARNING] Rule 0:
> org.apache.maven.plugins.enforcer.DependencyConvergence failed with
> message:
> Failed while enforcing releasability the error(s) are [
> Dependency convergence error for
> org.apache.hadoop:hadoop-hdfs:3.0.0-SNAPSHOT paths to dependency are:
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-hdfs:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-hdfs:3.0.0-20140718.221409-4777
> 
> 
> You can see that it sees 3.0.0-SNAPSHOT being used for one, and
> 3.0.0-20140718.221409-4777 for the other (which causes the error). The same
> thing happened in the stuff Ted posted, but for the KMS. Somehow the local
> maven repo is getting screwed up non-deterministically.
> 
> Tucu recommends we remove this check from the post-commit build, and
> instead make it part of the maven job used to build releases. At release
> time, there shouldn't be any ambiguity about version numbers.
> 
> Any brave volunteers out there? I am not a maven maven, but am happy to
> review pom.xml changes that do this, and I'll make sure the maven job used
> to build releases still does the dep check.
> 
> Best,
> Andrew
> 
> 
> 
> 
> On Thu, Jul 17, 2014 at 9:50 PM, Ted Yu  wrote:
> 
> > Here is the warning from enforcer:
> >
> > [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence
> > failed with message:
> > Failed while enforcing releasability the error(s) are [
> > Dependency convergence error for
> > org.apache.hadoop:hadoop-auth:3.0.0-20140718.043141-4847 paths to
> > dependency are:
> > +-org.apache.hadoop:hadoop-kms:3.0.0-SNAPSHOT
> >   +-org.apache.hadoop:hadoop-auth:3.0.0-20140718.043141-4847
> > and
> > +-org.apache.hadoop:hadoop-kms:3.0.0-SNAPSHOT
> >   +-org.apache.hadoop:hadoop-common:3.0.0-20140718.043201-4831
> > +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> > and
> > +-org.apache.hadoop:hadoop-kms:3.0.0-SNAPSHOT
> >   +-org.apache.hadoop:hadoop-common:3.0.0-20140718.043201-4831
> > +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> > ]
> >
> > FYI
> >
> >
> > On Thu, Jul 17, 2014 at 9:38 PM, Vinayakumar B 
> > wrote:
> >
> > > Hi,
> > > Hadoop-trunk-commit build always fails with message similar to below.
> > > Anybody knows about this?
> > >
> > > [ERROR] Failed to execute goal
> > > org.apache.maven.plugins:maven-enforcer-plugin:1.3.1:enforce
> > > (depcheck) on project hadoop-yarn-server-tests: Some Enforcer rules
> > > have failed. Look above for specific messages explaining why the rule
> > > failed. -> [Help 1]
> > >
> > >
> > >
> > > Regards,
> > > Vinay
> > >
> >


[jira] [Resolved] (HADOOP-10871) incorrect prototype in OpensslSecureRandom.c

2014-07-21 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10871.
---

       Resolution: Fixed
    Fix Version/s: fs-encryption (HADOOP-10150 and HDFS-6134)  (was: 3.0.0)

> incorrect prototype in OpensslSecureRandom.c
> 
>
> Key: HADOOP-10871
> URL: https://issues.apache.org/jira/browse/HADOOP-10871
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: util
>Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)
>
> Attachments: HADOOP-10871-fs-enc.001.patch
>
>
> There is an incorrect prototype in OpensslSecureRandom.c.
> {code}
> /home/cmccabe/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSec
> ureRandom.c:160:3: warning: call to function ‘openssl_rand_init’ without a 
> real prototype [-Wunprototyped-calls]
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10870) Failed to load OpenSSL cipher error logs on systems with old openssl versions

2014-07-21 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10870.
---

  Resolution: Fixed
   Fix Version/s: fs-encryption (HADOOP-10150 and HDFS-6134)
Target Version/s: fs-encryption (HADOOP-10150 and HDFS-6134)

> Failed to load OpenSSL cipher error logs on systems with old openssl versions
> -
>
> Key: HADOOP-10870
> URL: https://issues.apache.org/jira/browse/HADOOP-10870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)
>
> Attachments: HADOOP-10870-fs-enc.001.patch
>
>
> I built Hadoop from fs-encryption branch and deployed Hadoop (without 
> enabling any security confs) on a Centos 6.4 VM with an old version of 
> openssl.
> {code}
> [root@schu-enc hadoop-common]# rpm -qa | grep openssl
> openssl-1.0.0-27.el6_4.2.x86_64
> openssl-devel-1.0.0-27.el6_4.2.x86_64
> {code}
> When I try to do a simple "hadoop fs -ls", I get
> {code}
> [hdfs@schu-enc hadoop-common]$ hadoop fs -ls
> 2014-07-21 19:35:14,486 ERROR [main] crypto.OpensslCipher 
> (OpensslCipher.java:(87)) - Failed to load OpenSSL Cipher.
> java.lang.UnsatisfiedLinkError: Cannot find AES-CTR support, is your version 
> of Openssl new enough?
>   at org.apache.hadoop.crypto.OpensslCipher.initIDs(Native Method)
>   at 
> org.apache.hadoop.crypto.OpensslCipher.(OpensslCipher.java:84)
>   at 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec.(OpensslAesCtrCryptoCodec.java:50)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:129)
>   at org.apache.hadoop.crypto.CryptoCodec.getInstance(CryptoCodec.java:55)
>   at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:591)
>   at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:561)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:139)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2590)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2624)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2606)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:352)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
>   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:228)
>   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:211)
>   at 
> org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> 2014-07-21 19:35:14,495 WARN  [main] crypto.CryptoCodec 
> (CryptoCodec.java:getInstance(66)) - Crypto codec 
> org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec is not available.
> {code}
> It would be an improvement to clean up/shorten this error log.
> hadoop checknative shows the error as well
> {code}
> [hdfs@schu-enc ~]$ hadoop checknative
> 2014-07-21 19:38:38,376 INFO  [main] bzip2.Bzip2Factory 
> (Bzip2Factory.java:isNativeBzip2Loaded(70)) - Successfully loaded & 
> initialized native-bzip2 library system-native
> 2014-07-21 19:38:38,395 INFO  [main] zlib.ZlibFactory 
> (ZlibFactory.java:(49)) - Successfully loaded & initialized 
> native-zlib library
> 2014-07-21 19:38:38,411 ERROR [main] crypto.OpensslCipher 
> (OpensslCipher.java:(87)) - Failed to load OpenSSL Cipher.
> java.lang.UnsatisfiedLinkError: Cannot find AES-CTR support, is your version 
> of Openssl new enough?
>   at org.apache.hadoop.crypto.OpensslCipher.initIDs(Native Method)
>   at 
> org.apache.hadoop.crypto.OpensslCipher.(OpensslCipher.java:84)
>   at 
> 

[jira] [Resolved] (HADOOP-10510) TestSymlinkLocalFSFileContext tests are failing

2014-07-21 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang resolved HADOOP-10510.


Resolution: Duplicate

I'm marking this as a duplicate per [~andrew.wang]'s comments in HADOOP-10866. 
Thanks to you all!


> TestSymlinkLocalFSFileContext tests are failing
> ---
>
> Key: HADOOP-10510
> URL: https://issues.apache.org/jira/browse/HADOOP-10510
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.4.0
> Environment: Linux
>Reporter: Daniel Darabos
> Attachments: TestSymlinkLocalFSFileContext-output.txt, 
> TestSymlinkLocalFSFileContext.txt
>
>
> Test results:
> https://gist.github.com/oza/9965197
> This was mentioned on hadoop-common-dev:
> http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201404.mbox/%3CCAAD07OKRSmx9VSjmfk1YxyBmnFM8mwZSp%3DizP8yKKwoXYvn3Qg%40mail.gmail.com%3E
> Can you suggest a workaround in the meantime? I'd like to send a pull request 
> for an unrelated bug, but these failures mean I cannot build hadoop-common to 
> test my fix. Thanks.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10871) incorrect prototype in OpensslSecureRandom.c

2014-07-21 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10871:
-

 Summary: incorrect prototype in OpensslSecureRandom.c
 Key: HADOOP-10871
 URL: https://issues.apache.org/jira/browse/HADOOP-10871
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: util
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-10871-fs-enc.001.patch

There is an incorrect prototype in OpensslSecureRandom.c.

{code}
/home/cmccabe/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/random/OpensslSec
ureRandom.c:160:3: warning: call to function ‘openssl_rand_init’ without a real 
prototype [-Wunprototyped-calls]
{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10870) Failed to load OpenSSL cipher error logs on systems with old openssl versions

2014-07-21 Thread Stephen Chu (JIRA)
Stephen Chu created HADOOP-10870:


 Summary: Failed to load OpenSSL cipher error logs on systems with 
old openssl versions
 Key: HADOOP-10870
 URL: https://issues.apache.org/jira/browse/HADOOP-10870
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Stephen Chu


I built Hadoop from fs-encryption branch and deployed Hadoop (without enabling 
any security confs) on a Centos 6.4 VM with an old version of openssl.

{code}
[root@schu-enc hadoop-common]# rpm -qa | grep openssl
openssl-1.0.0-27.el6_4.2.x86_64
openssl-devel-1.0.0-27.el6_4.2.x86_64
{code}

When I try to do a simple "hadoop fs -ls", I get
{code}
[hdfs@schu-enc hadoop-common]$ hadoop fs -ls
2014-07-21 19:35:14,486 ERROR [main] crypto.OpensslCipher 
(OpensslCipher.java:(87)) - Failed to load OpenSSL Cipher.
java.lang.UnsatisfiedLinkError: Cannot find AES-CTR support, is your version of 
Openssl new enough?
at org.apache.hadoop.crypto.OpensslCipher.initIDs(Native Method)
at 
org.apache.hadoop.crypto.OpensslCipher.(OpensslCipher.java:84)
at 
org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec.(OpensslAesCtrCryptoCodec.java:50)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at 
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:129)
at org.apache.hadoop.crypto.CryptoCodec.getInstance(CryptoCodec.java:55)
at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:591)
at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:561)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:139)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2590)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2624)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2606)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:352)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:228)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:211)
at 
org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:194)
at org.apache.hadoop.fs.shell.Command.run(Command.java:155)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
2014-07-21 19:35:14,495 WARN  [main] crypto.CryptoCodec 
(CryptoCodec.java:getInstance(66)) - Crypto codec 
org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec is not available.
{code}

hadoop checknative shows an error

{code}
[hdfs@schu-enc ~]$ hadoop checknative
2014-07-21 19:38:38,376 INFO  [main] bzip2.Bzip2Factory 
(Bzip2Factory.java:isNativeBzip2Loaded(70)) - Successfully loaded & initialized 
native-bzip2 library system-native
2014-07-21 19:38:38,395 INFO  [main] zlib.ZlibFactory 
(ZlibFactory.java:(49)) - Successfully loaded & initialized native-zlib 
library
2014-07-21 19:38:38,411 ERROR [main] crypto.OpensslCipher 
(OpensslCipher.java:(87)) - Failed to load OpenSSL Cipher.
java.lang.UnsatisfiedLinkError: Cannot find AES-CTR support, is your version of 
Openssl new enough?
at org.apache.hadoop.crypto.OpensslCipher.initIDs(Native Method)
at 
org.apache.hadoop.crypto.OpensslCipher.(OpensslCipher.java:84)
at 
org.apache.hadoop.util.NativeLibraryChecker.main(NativeLibraryChecker.java:82)
Native library checking:
hadoop:  true /home/hdfs/hadoop-3.0.0-SNAPSHOT/lib/native/libhadoop.so.1.0.0
zlib:true /lib64/libz.so.1
snappy:  true /usr/lib64/libsnappy.so.1
lz4: true revision:99
bzip2:   true /lib64/libbz2.so.1
openssl: false 
{code}

Thanks to cmccabe who identified this issue as a bug.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10869) JavaKeyStoreProvider backing jceks file may get corrupted

2014-07-21 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-10869:
---

 Summary: JavaKeyStoreProvider backing jceks file may get corrupted
 Key: HADOOP-10869
 URL: https://issues.apache.org/jira/browse/HADOOP-10869
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh


Currently, flush writes to the same jceks file; if there is a failure during a 
write, the jceks file will be rendered unusable, losing access to all keys 
stored in it.
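
A common way to avoid this kind of corruption is to write the new keystore 
contents to a side file and only swap it into place once the write has 
succeeded, keeping the previous copy as a backup. The snippet below is only a 
minimal sketch of that pattern using java.nio.file on a local path; it is not 
the actual HADOOP-10869 patch, and the ".tmp"/".old" naming is made up for 
illustration.

{code}
import java.io.OutputStream;
import java.nio.file.*;
import java.security.KeyStore;

public class AtomicKeystoreFlush {
  // Sketch: write the keystore to a temp file, then atomically swap it in,
  // so a crash mid-write never leaves the only copy half-written.
  static void flush(KeyStore ks, char[] password, Path jceks) throws Exception {
    Path tmp = jceks.resolveSibling(jceks.getFileName() + ".tmp");
    try (OutputStream out = Files.newOutputStream(tmp)) {
      ks.store(out, password);                   // write new contents to the temp file
    }
    if (Files.exists(jceks)) {                   // keep the previous version as a backup
      Files.copy(jceks, jceks.resolveSibling(jceks.getFileName() + ".old"),
          StandardCopyOption.REPLACE_EXISTING);
    }
    Files.move(tmp, jceks, StandardCopyOption.REPLACE_EXISTING,
        StandardCopyOption.ATOMIC_MOVE);         // swap the new file into place
  }
}
{code}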



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10868) Create a ZooKeeper-backed secret provider

2014-07-21 Thread Robert Kanter (JIRA)
Robert Kanter created HADOOP-10868:
--

 Summary: Create a ZooKeeper-backed secret provider
 Key: HADOOP-10868
 URL: https://issues.apache.org/jira/browse/HADOOP-10868
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.4.1
Reporter: Robert Kanter
Assignee: Robert Kanter


Create a secret provider (see HADOOP-10791) that is backed by ZooKeeper and can 
synchronize amongst different servers.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


jenkins test already sent out email notification, but appear to be stuck

2014-07-21 Thread Yongjun Zhang
Hi,

I saw this a couple of times lately: a Jenkins test job has finished running
tests, updated the corresponding JIRA and sent out an email notification.
However, the test report is not ready at the link sent in the email,
and the console link shows that the job is still running.

I wonder if anyone saw the same problem and has insight into what's going
on.

e.g. email notification received says:

Test results:
https://builds.apache.org/job/PreCommit-HADOOP-Build/4335//testReport/
Console output:
https://builds.apache.org/job/PreCommit-HADOOP-Build/4335//console


Clicking on the testReport link sent me to

  https://builds.apache.org/job/PreCommit-HADOOP-Build/4335//

without a test report, and clicking on the console link sent me to:

https://builds.apache.org/job/PreCommit-HADOOP-Build/4335/consoleFull

which shows the job is still running:

Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to PreCommit-HADOOP-Build #4300
Archived 17 artifacts
Archive block size is 32768
Received 41 blocks and 7584340 bytes
Compression is 15.0%
Took 2.5 sec
Description set: HADOOP-10866
Recording test results
Publish JUnit test result report is waiting for a checkpoint on
PreCommit-HADOOP-Build #4334

  <== rotating circle indicating it's still running, maybe "waiting for a
checkpoint..."


I saw it will eventually get out of this state, but it takes a very long time
(I don't know how long). This symptom appears to be intermittent.

Thanks.

--Yongjun


[jira] [Resolved] (HADOOP-5602) existing Bzip2Codec supported in hadoop 0.19/0.20 skips the input records when the input bzip2 compressed file is made up of multiple concatenated .bz2 files.

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5602.
--

Resolution: Fixed

> existing Bzip2Codec supported in hadoop 0.19/0.20 skips the input records 
> when the input bzip2 compressed file is made up of multiple concatenated .bz2 
> files. 
> --
>
> Key: HADOOP-5602
> URL: https://issues.apache.org/jira/browse/HADOOP-5602
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.19.0, 0.19.1
>Reporter: Suhas Gogate
>
> Until the Bzip2Codec supports concatenated compressed bzip2 files as input, it 
> should detect them and throw an error to indicate the input is not compatible...
> (see the related JIRA https://issues.apache.org/jira/browse/HADOOP-5601) 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5566) JobTrackerMetricsInst should probably keep track of JobFailed

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5566.
--

Resolution: Incomplete

I'm just going to close this as stale.

> JobTrackerMetricsInst should probably keep track of JobFailed
> -
>
> Key: HADOOP-5566
> URL: https://issues.apache.org/jira/browse/HADOOP-5566
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jerome Boulon
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5439) FileSystem.create() with overwrite param specified sometimes takes a long time to return.

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5439.
--

Resolution: Fixed

> FileSystem.create() with overwrite param specified sometimes takes a long 
> time to return.
> -
>
> Key: HADOOP-5439
> URL: https://issues.apache.org/jira/browse/HADOOP-5439
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.19.0
>Reporter: He Yongqiang
>
> If a file already exists, it takes a long time for the overwrite create to 
> return.
> {code}
> fs.create(path_1, true);
> {code}
> sometimes takes a long time. 
> Instead, the code:
> {code}
> if (fs.exists(path_1))
> fs.delete(path_1);
> fs.create(path_1);
> {code}
> works pretty well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5436) job history directory grows without bound, locks up job tracker on new job submission

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5436.
--

Resolution: Fixed

> job history directory grows without bound, locks up job tracker on new job 
> submission
> -
>
> Key: HADOOP-5436
> URL: https://issues.apache.org/jira/browse/HADOOP-5436
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.19.0, 0.20.0, 0.20.1, 0.20.2
>Reporter: Tim Williamson
> Attachments: HADOOP-5436.patch
>
>
> An unpleasant surprise upgrading to 0.19: requests to jobtracker.jsp would 
> take a long time or even time out whenever new jobs were submitted.  
> Investigation showed the call to JobInProgress.initTasks() was calling 
> JobHistory.JobInfo.logSubmitted() which in turn was calling 
> JobHistory.getJobHistoryFileName() which was pegging the CPU for a couple 
> minutes.  Further investigation showed there were 200,000+ files in the job 
> history folder -- and every submission was creating a FileStatus for them 
> all, then applying a regular expression to just the name.  All this just on 
> the off chance the job tracker had been restarted (see HADOOP-3245).  To make 
> matters worse, these files cannot be safely deleted while the job tracker is 
> running, as the disappearance of a history file at the wrong time causes a 
> FileNotFoundException.
> So to summarize the issues:
> - having Hadoop default to storing all the history files in a single 
> directory is a Bad Idea
> - doing expensive processing of every history file on every job submission is 
> a Worse Idea
> - doing expensive processing of every history file on every job submission 
> while holding a lock on the JobInProgress object and thereby blocking the 
> jobtracker.jsp from rendering is a Terrible Idea (note: haven't confirmed 
> this, but a cursory glance suggests that's what's going on)
> - not being able to clean up the mess without taking down the job tracker is 
> just Unfortunate



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5402) TaskTracker ignores most RemoteExceptions from heartbeat processing

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5402.
--

Resolution: Incomplete

This is probably stale.

> TaskTracker ignores most RemoteExceptions from heartbeat processing
> ---
>
> Key: HADOOP-5402
> URL: https://issues.apache.org/jira/browse/HADOOP-5402
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.21.0
>Reporter: Owen O'Malley
>
> The code in TaskTracker.offerService looks like:
> {code}
>   } catch (RemoteException re) {
> String reClass = re.getClassName();
> if (DisallowedTaskTrackerException.class.getName().equals(reClass)) {
>   LOG.info("Tasktracker disallowed by JobTracker.");
>   return State.DENIED;
> }
>  }
> {code}
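
The concern above is that any RemoteException other than 
DisallowedTaskTrackerException falls out of the catch block silently and the 
heartbeat loop just carries on. Below is a minimal, self-contained sketch of a 
stricter handler; the State enum and the hard-coded exception class name are 
hypothetical stand-ins for the real TaskTracker types, so treat this as an 
illustration of the idea rather than a patch.

{code}
import java.io.IOException;

import org.apache.hadoop.ipc.RemoteException;

public class HeartbeatErrorHandling {
  enum State { NORMAL, DENIED }

  // Stand-in for DisallowedTaskTrackerException.class.getName()
  static final String DISALLOWED =
      "org.apache.hadoop.mapred.DisallowedTaskTrackerException";

  static State handle(RemoteException re) throws IOException {
    if (DISALLOWED.equals(re.getClassName())) {
      System.err.println("Tasktracker disallowed by JobTracker.");
      return State.DENIED;                       // known, expected condition
    }
    // Anything else is unexpected: surface it instead of swallowing it.
    System.err.println("Unexpected error from heartbeat: " + re.getClassName());
    throw re;                                    // RemoteException extends IOException
  }
}
{code}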



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5388) Add ids to tables in JSP pages to ease scraping of the data.

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5388.
--

Resolution: Incomplete

I'm going to close this as stale.  First, the UIs have changed tremendously.  
Secondly, all of this data is now available via JSON, JMX, and various shell 
utilities.

> Add ids to tables in JSP pages to ease scraping of the data.
> 
>
> Key: HADOOP-5388
> URL: https://issues.apache.org/jira/browse/HADOOP-5388
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sreekanth Ramakrishnan
>Priority: Minor
> Attachments: HADOOP-5388.patch
>
>
> Currently, the tables which are generated by the JSP pages are lacking ids. 
> If the tables had ids then it would ease the pain of writing scraping 
> utilities.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5345) JobID is deprecated but there are no references to classes that are replacing it.

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5345.
--

Resolution: Won't Fix

I suppose at this point, this is just stale given the deprecation -> 
undeprecation that happened.

> JobID is deprecated but there are no references to classes that are replacing 
> it.
> -
>
> Key: HADOOP-5345
> URL: https://issues.apache.org/jira/browse/HADOOP-5345
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.20.0
>Reporter: Santhosh Srinivasan
>
> JobID is deprecated but there are no references to classes that are replacing 
> it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5312) Not all core javadoc are checked by Hudson

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5312.
--

Resolution: Fixed

With Mavenization, etc, I think this is fixed if not just stale now.

> Not all core javadoc are checked by Hudson
> --
>
> Key: HADOOP-5312
> URL: https://issues.apache.org/jira/browse/HADOOP-5312
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Tsz Wo Nicholas Sze
>
> Since "ant javadoc" does not generate all core javadocs, some javadocs (e.g. 
> HDFS javadocs) are not checked by Hudson.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Why Hadoop-trunk-commit always fails?

2014-07-21 Thread Andrew Wang
I dug around a bit with Tucu, and I think it's essentially the dependency
analyzer screwing up with snapshot artifacts. I found a different error for
HttpFS that looks similar:


[WARNING]
Dependency convergence error for
org.apache.hadoop:hadoop-hdfs:3.0.0-SNAPSHOT paths to dependency are:
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-hdfs:3.0.0-SNAPSHOT
and
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-hdfs:3.0.0-20140718.221409-4777

[WARNING] Rule 0:
org.apache.maven.plugins.enforcer.DependencyConvergence failed with
message:
Failed while enforcing releasability the error(s) are [
Dependency convergence error for
org.apache.hadoop:hadoop-hdfs:3.0.0-SNAPSHOT paths to dependency are:
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-hdfs:3.0.0-SNAPSHOT
and
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-hdfs:3.0.0-20140718.221409-4777


You can see that it sees 3.0.0-SNAPSHOT being used for one, and
3.0.0-20140718.221409-4777 for the other (which causes the error). The same
thing happened in the stuff Ted posted, but for the KMS. Somehow the local
maven repo is getting screwed up non-deterministically.

Tucu recommends we remove this check from the post-commit build, and
instead make it part of the maven job used to build releases. At release
time, there shouldn't be any ambiguity about version numbers.

Any brave volunteers out there? I am not a maven maven, but am happy to
review pom.xml changes that do this, and I'll make sure the maven job used
to build releases still does the dep check.

Best,
Andrew




On Thu, Jul 17, 2014 at 9:50 PM, Ted Yu  wrote:

> Here is the warning from enforcer:
>
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence
> failed with message:
> Failed while enforcing releasability the error(s) are [
> Dependency convergence error for
> org.apache.hadoop:hadoop-auth:3.0.0-20140718.043141-4847 paths to
> dependency are:
> +-org.apache.hadoop:hadoop-kms:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-auth:3.0.0-20140718.043141-4847
> and
> +-org.apache.hadoop:hadoop-kms:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-20140718.043201-4831
> +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-kms:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-20140718.043201-4831
> +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> ]
>
> FYI
>
>
> On Thu, Jul 17, 2014 at 9:38 PM, Vinayakumar B 
> wrote:
>
> > Hi,
> > Hadoop-trunk-commit build always fails with message similar to below.
> > Anybody knows about this?
> >
> > [ERROR] Failed to execute goal
> > org.apache.maven.plugins:maven-enforcer-plugin:1.3.1:enforce
> > (depcheck) on project hadoop-yarn-server-tests: Some Enforcer rules
> > have failed. Look above for specific messages explaining why the rule
> > failed. -> [Help 1]
> >
> >
> >
> > Regards,
> > Vinay
> >
>


[jira] [Resolved] (HADOOP-5295) JobTracker can hold the list of lost TaskTrackers instead of removing them completely.

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5295.
--

Resolution: Won't Fix

Won't Fix due to YARN.

> JobTracker can hold the list of lost TaskTrackers instead of removing them 
> completely.
> --
>
> Key: HADOOP-5295
> URL: https://issues.apache.org/jira/browse/HADOOP-5295
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Vinod Kumar Vavilapalli
>
> Having the name, and possibly the time for which it has been lost, and 
> displaying the info via the client UI as well as the web UI will help in 
> recognizing problematic nodes easily and quickly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5261) HostsFileReader does not properly implement concurrency support

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5261.
--

Resolution: Fixed

Fixed.

> HostsFileReader does not properly implement concurrency support
> ---
>
> Key: HADOOP-5261
> URL: https://issues.apache.org/jira/browse/HADOOP-5261
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Attachments: HADOOP-5261.patch
>
>
> As currently implemented, the class HostsFileReader does not properly allow 
> concurrent access. 
> It maintains two Sets and manipulates them within synchronized fields, but 
> provides accessor methods that publish unsynchronized access to the sets' 
> references (getHosts() and getExcludedHosts()).  The sets are implemented as 
> HashSets, which are not thread safe.  This can allow a method to obtain a 
> reference to a set that may be modified concurrently by the HostsFileReader.
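
The unsafe pattern and one common remedy fit in a few lines. The class below 
is a simplified stand-in for HostsFileReader, not its actual code; it contrasts 
publishing the live HashSet with handing out a copy under the same lock that 
updates use.

{code}
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Simplified stand-in for HostsFileReader to illustrate the concurrency issue.
public class HostsReaderSketch {
  private final Set<String> includes = new HashSet<String>();

  // Unsafe: callers get the live HashSet, which the reader may mutate
  // concurrently while they iterate over it.
  public Set<String> getHostsUnsafe() {
    return includes;
  }

  // Safer: copy (and wrap) the set while holding the same lock that refresh()
  // uses, so callers never observe a set that is being modified.
  public synchronized Set<String> getHosts() {
    return Collections.unmodifiableSet(new HashSet<String>(includes));
  }

  public synchronized void refresh(Set<String> newIncludes) {
    includes.clear();
    includes.addAll(newIncludes);
  }
}
{code}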



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10545) hdfs zkfc NullPointerException

2014-07-21 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li resolved HADOOP-10545.
-

Resolution: Duplicate

> hdfs zkfc NullPointerException
> --
>
> Key: HADOOP-10545
> URL: https://issues.apache.org/jira/browse/HADOOP-10545
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
> Environment: Linux
>Reporter: Sebastien Barrier
>Priority: Minor
>
> Running hdfs zkfc on a node which is not a Namenode results in a 
> NullPointerException.
> An error message should be displayed telling that zkfc must be run only on a 
> Namenode server and/or that the configuration parameters should be verified.
> # hdfs zkfc -formatZK
> Exception in thread "main" java.lang.NullPointerException
> at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
> at 
> org.apache.hadoop.hdfs.tools.NNHAServiceTarget.(NNHAServiceTarget.java:57)
> at 
> org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:128)
> at 
> org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:172)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5238) ivy publish and ivy integration does not cleanly work

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5238.
--

Resolution: Fixed

> ivy publish and ivy integration does not cleanly work
> -
>
> Key: HADOOP-5238
> URL: https://issues.apache.org/jira/browse/HADOOP-5238
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.19.0
>Reporter: Stefan Groschupf
>Assignee: Giridharan Kesavan
>Priority: Minor
>
> As far as I understand, the goal of using ivy for hadoop is to be able to 
> integrate hadoop easily in thirdparty builds that use transitive dependency 
> tools like ivy or maven. 
> The way ivy is currently integrated has a couple of hiccups. 
> + the generated artifact files have names that can't be used for maven or 
> ivy, e.g. hadoop-version-core, but the standard would be hadoop-core-version. 
> This affects all the generated artifact files like hadoop-version-example etc. 
> This is caused by the use of ${final.name}-core.jar
> + This conflicts with the use of "${version}" in ivy.xml: <info 
> organisation="org.apache.hadoop" module="${ant.project.name}" 
> revision="${version}">. The result will be an error report by ivy that the 
> found artifact and the defined artifact name are different.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5195) Unit tests for TestProxyUgiManager and TestHdfsProxy consistently failing on trunk builds

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5195.
--

Resolution: Fixed

Closing as stale.

> Unit tests for TestProxyUgiManager and TestHdfsProxy consistently failing on 
> trunk builds
> -
>
> Key: HADOOP-5195
> URL: https://issues.apache.org/jira/browse/HADOOP-5195
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.21.0
>Reporter: Lee Tucker
>Priority: Minor
>
> Of the last 10 trunk builds, these unit tests have failed in about 50% of the 
> builds.   Trunk builds have been failing unit tests consistently for as far 
> back as I can see in hudson.  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5179) FileSystem#copyToLocalFile shouldn't copy .crc files

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5179.
--

Resolution: Duplicate

Closing as a dupe then.

> FileSystem#copyToLocalFile shouldn't copy .crc files
> 
>
> Key: HADOOP-5179
> URL: https://issues.apache.org/jira/browse/HADOOP-5179
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.19.0
>Reporter: Nathan Marz
>
> The .crc files shouldn't be copied locally, as they are an internal Hadoop 
> filesystem thing. This is causing the following problem for me:
> I sometimes copy a directory from HDFS locally, modify those files, and then 
> reupload them somewhere else in HDFS. I then get checksum errors on the 
> re-upload.
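
For reference, later FileSystem versions expose a copyToLocalFile overload with 
a useRawLocalFileSystem flag that writes through RawLocalFileSystem and so 
skips the client-side .crc files. Whether that overload exists depends on the 
Hadoop version in use, so the call below is a hedged example (with made-up 
paths) rather than a fix for 0.19.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyWithoutCrc {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path src = new Path("/user/example/data");   // hypothetical HDFS directory
    Path dst = new Path("file:///tmp/data");     // hypothetical local target
    // delSrc = false, useRawLocalFileSystem = true => no .crc files are written
    fs.copyToLocalFile(false, src, dst, true);
  }
}
{code}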



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5116) TestSocketIOWithTimeout fails under AIX - TIMEOUT error.

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5116.
--

Resolution: Incomplete

Closing as stale.

> TestSocketIOWithTimeout fails under AIX - TIMEOUT error. 
> -
>
> Key: HADOOP-5116
> URL: https://issues.apache.org/jira/browse/HADOOP-5116
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.18.2
> Environment: AIX
>Reporter: Bill Habermaas
>Priority: Minor
> Attachments: javacore.20090126.144729.376858.0001.txt
>
>
> This test expects an exception to occur when read/writing a closed socket.  
> Under AIX this does not occur and results in a loop.  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5115) TestLocalDirAllocator fails under AIX

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5115.
--

Resolution: Incomplete

Closing as stale.

> TestLocalDirAllocator fails under AIX 
> --
>
> Key: HADOOP-5115
> URL: https://issues.apache.org/jira/browse/HADOOP-5115
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.18.2
> Environment: AIX 
>Reporter: Bill Habermaas
>Priority: Minor
>
> TestLocalDirAllocator fails when running under AIX for the same reasons as 
> CYGWIN under Windows (as noted in the test source code comments).  AIX allows 
> the writing of a file in a directory that is marked read-only. This breaks 
> the test. If the test is changed to sense for AIX (as it does for windows) 
> then the usefulness of this unit test is questionable other than exposing an 
> interesting anomaly in the native file system. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5112) Upgrade Clover to 2.4.2 and enable Test Optimization in HADOOP

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5112.
--

Resolution: Incomplete

Closing as stale.

> Upgrade Clover to 2.4.2 and enable Test Optimization in HADOOP
> --
>
> Key: HADOOP-5112
> URL: https://issues.apache.org/jira/browse/HADOOP-5112
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Nick Pellow
>
> The current [Hadoop 
> build|http://hudson.zones.apache.org/hudson/view/Hadoop/job/Hadoop-trunk/clover/]
>  on Hudson is using Clover 1.3.13. 
> I will attach a patch to the build.xml and the clover 2.4.2 jar to this issue.
> Test Optimization works by only running tests for which code has changed. 
> This can be used both in a CI environment and on a developers machine to make 
> running tests faster.
> Clover will also run tests which previously failed. Modified source code is 
> detected at compile time and then used to select the tests that get run. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5109) TestKillCompletedJob is failing intermittetnly when run as part of test-core

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5109.
--

Resolution: Fixed

> TestKillCompletedJob is failing intermittetnly when run as part of test-core
> 
>
> Key: HADOOP-5109
> URL: https://issues.apache.org/jira/browse/HADOOP-5109
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Jakob Homan
> Attachments: TEST-org.apache.hadoop.mapred.TestKillCompletedJob.txt
>
>
> TestKillCompletedJob fails most times when run as part of test-core, but 
> succeeds when run by itself.
> {noformat}
> Testcase: testKillCompJob took 8.048 sec
>   Caused an ERROR
> Job failed!
> java.io.IOException: Job failed!
>   at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1343)
>   at 
> org.apache.hadoop.mapred.TestKillCompletedJob.launchWordCount(TestKillCompletedJob.java:78)
>   at 
> org.apache.hadoop.mapred.TestKillCompletedJob.testKillCompJob(TestKillCompletedJob.java:112)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5102) Split build script for building core, hdfs and mapred separately

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5102.
--

Resolution: Fixed

> Split build script for building core, hdfs and mapred separately
> 
>
> Key: HADOOP-5102
> URL: https://issues.apache.org/jira/browse/HADOOP-5102
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Sharad Agarwal
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5069) add a Hadoop-centric junit test result listener

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5069.
--

Resolution: Fixed

We replaced ant with maven. Closing as fixed.

> add a Hadoop-centric junit test result listener
> ---
>
> Key: HADOOP-5069
> URL: https://issues.apache.org/jira/browse/HADOOP-5069
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Priority: Minor
>
> People are encountering different problems with hadoop's unit tests, defects 
> currently being WONTFIX'd
> # HADOOP-5001 : Junit tests that time out don't write any test progress 
> related logs
> # HADOOP-4721 : OOM in .TestSetupAndCleanupFailure
> There is a root cause here, the XmlResultFormatter of Ant buffers everything 
> before writing out a DOM. Too much logged: OOM and no output. Timeout: kill 
> and no output.
> We could add a new logger class to hadoop and then push it back into Ant once 
> we were happy, or keep it separate if we had specific dependencies (like on 
> hadoop-dfs API) that they lacked. 
> Some ideas
> # stream XML to disk. We would have to put the test summary at the end; could 
> use XSL to generate HTML and the classic XML content
> # stream XHTML to disk. Makes it readable as you go along; makes the XSL work 
> afterwards harder.
> # push out results as records to a DFS. There's a problem here in that this 
> needs to be a different DFS from that you are testing, yet it needs to be 
> compatible with the client. 
> Item #3 would be interesting but doing it inside JUnit is too dangerous 
> classpath and config wise. Better to have Ant do the copy afterwards. What is 
> needed then is a way to easily append different tests to the same DFS file in 
> a way that tools can analyse them all afterwards. The copy is easy -add a new 
> Ant resource for that- but the choice of format is trickier.
> Here's some work I did on this a couple of years back; I've not done much 
> since then: 
> http://people.apache.org/~stevel/slides/distributed_testing_with_smartfrog_slides.pdf
> Is anyone else interested in exploring this? 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5061) Update chinese documentation for default configuration

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5061.
--

Resolution: Incomplete

Closing this as stale.

> Update chinese documentation for default configuration
> --
>
> Key: HADOOP-5061
> URL: https://issues.apache.org/jira/browse/HADOOP-5061
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Sharad Agarwal
>
> The chinese documentation needs to be updated as per HADOOP-4828



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5059) 'whoami', 'topologyscript' calls failing with java.io.IOException: error=12, Cannot allocate memory

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5059.
--

Resolution: Fixed

I'm going to close this as fixed.

A bunch of things have happened:

a) On certain platforms, java now uses posix_spawn() instead of fork().

b) Topology can now be provided by a class (see the sketch below).

c) The whoami call has been removed.

So there are definitely ways to mitigate/eliminate this issue.
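
For item (b), the usual hook is a class implementing 
org.apache.hadoop.net.DNSToSwitchMapping that is named in the topology mapping 
configuration instead of a script. The class below is only a minimal sketch 
that maps every host to the default rack; the reloadCachedMappings methods are 
included because newer versions of the interface declare them.

{code}
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.net.DNSToSwitchMapping;

// Minimal sketch: resolve every host to the default rack, with no shell-out.
public class StaticRackMapping implements DNSToSwitchMapping {

  public List<String> resolve(List<String> names) {
    List<String> racks = new ArrayList<String>(names.size());
    for (int i = 0; i < names.size(); i++) {
      racks.add("/default-rack");
    }
    return racks;
  }

  // Declared by newer versions of the interface; nothing is cached here.
  public void reloadCachedMappings() {
  }

  public void reloadCachedMappings(List<String> names) {
  }
}
{code}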

> 'whoami', 'topologyscript' calls failing with java.io.IOException: error=12, 
> Cannot allocate memory
> ---
>
> Key: HADOOP-5059
> URL: https://issues.apache.org/jira/browse/HADOOP-5059
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
> Environment: On nodes with 
> physical memory 32G
> Swap 16G 
> Primary/Secondary Namenode using 25G of heap or more
>Reporter: Koji Noguchi
> Attachments: TestSysCall.java
>
>
> We've seen primary/secondary namenodes fail when calling whoami or 
> topologyscripts.
> (Discussed as part of HADOOP-4998)
> Sample stack traces.
> Primary Namenode
> {noformat}
> 2009-01-12 03:57:27,381 WARN org.apache.hadoop.net.ScriptBasedMapping: 
> java.io.IOException: Cannot run program
> "/path/topologyProgram" (in directory "/path"):
> java.io.IOException: error=12, Cannot allocate memory
> at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
> at org.apache.hadoop.util.Shell.run(Shell.java:134)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:286)
> at 
> org.apache.hadoop.net.ScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:122)
> at 
> org.apache.hadoop.net.ScriptBasedMapping.resolve(ScriptBasedMapping.java:73)
> at 
> org.apache.hadoop.dfs.FSNamesystem$ResolutionMonitor.run(FSNamesystem.java:1869)
> at java.lang.Thread.run(Thread.java:619)
> Caused by: java.io.IOException: java.io.IOException: error=12, Cannot 
> allocate memory
> at java.lang.UNIXProcess.(UNIXProcess.java:148)
> at java.lang.ProcessImpl.start(ProcessImpl.java:65)
> at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
> ... 7 more
> 2009-01-12 03:57:27,381 ERROR org.apache.hadoop.fs.FSNamesystem: The resolve 
> call returned null! Using /default-rack
> for some hosts
> 2009-01-12 03:57:27,381 INFO org.apache.hadoop.net.NetworkTopology: Adding a 
> new node: /default-rack/55.5.55.55:50010
> {noformat}
> Secondary Namenode
> {noformat}
> 2008-10-09 02:00:58,288 ERROR org.apache.hadoop.dfs.NameNode.Secondary: 
> java.io.IOException:
> javax.security.auth.login.LoginException: Login failed: Cannot run program 
> "whoami": java.io.IOException:
> error=12, Cannot allocate memory
> at 
> org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:250)
> at 
> org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:275)
> at 
> org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:257)
> at 
> org.apache.hadoop.dfs.FSNamesystem.setConfigurationParameters(FSNamesystem.java:370)
> at org.apache.hadoop.dfs.FSNamesystem.(FSNamesystem.java:359)
> at 
> org.apache.hadoop.dfs.SecondaryNameNode.doMerge(SecondaryNameNode.java:340)
> at 
> org.apache.hadoop.dfs.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:312)
> at 
> org.apache.hadoop.dfs.SecondaryNameNode.run(SecondaryNameNode.java:223)
> at java.lang.Thread.run(Thread.java:619)
> at 
> org.apache.hadoop.dfs.FSNamesystem.setConfigurationParameters(FSNamesystem.java:372)
> at org.apache.hadoop.dfs.FSNamesystem.(FSNamesystem.java:359)
> at 
> org.apache.hadoop.dfs.SecondaryNameNode.doMerge(SecondaryNameNode.java:340)
> at 
> org.apache.hadoop.dfs.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:312)
> at 
> org.apache.hadoop.dfs.SecondaryNameNode.run(SecondaryNameNode.java:223)
> at java.lang.Thread.run(Thread.java:619)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-4998) Implement a native OS runtime for Hadoop

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-4998.
--

Resolution: Fixed

Closing as fixed as libhadoop.so has all sorts of random ... uhh.. stuff in it 
now.

> Implement a native OS runtime for Hadoop
> 
>
> Key: HADOOP-4998
> URL: https://issues.apache.org/jira/browse/HADOOP-4998
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: native
>Reporter: Arun C Murthy
>Assignee: Arun C Murthy
> Attachments: hadoop-4998-1.patch
>
>
> It would be useful to implement a JNI-based runtime for Hadoop to get access 
> to the native OS runtime. This would allow us to stop relying on exec'ing 
> bash to get access to information such as user-groups, process limits etc. 
> and for features such as chown/chgrp (org.apache.hadoop.util.Shell).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-4953) config property mapred.child.java.opts has maximum length that generates NoClassDefFoundError if exceeded

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-4953.
--

Resolution: Won't Fix

I'm closing this as won't fix since this parameter is mostly deprecated in favor 
of map- and reduce-specific environment vars.

> config property mapred.child.java.opts has maximum length that generates 
> NoClassDefFoundError if exceeded
> -
>
> Key: HADOOP-4953
> URL: https://issues.apache.org/jira/browse/HADOOP-4953
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 0.19.0
> Environment: Amazon EC2 Ubuntu 8.04 hardy AMI" (Debian version 
> "lenny/sid") 
> JDK 1.6.0_07-b06 from Sun
> kernel.ostype = Linux
> kernel.osrelease = 2.6.21.7-2.fc8xen
> kernel.version = #1 SMP Fri Feb 15 12:39:36 EST 2008
> powernow-k8: Found 1 Dual-Core AMD Opteron(tm) Processor 2218 HE processors 
> (version 2.00.00)
>Reporter: Paul Baclace
>
> There is an unexpected max length for the value of config property 
> mapred.child.java.opts that, if exceeded, generates an opaque 
> NoClassDefFoundError in child tasks.  
> The max length for the value is 146 chars.  A length of 147 chars will cause 
> the exception.  For example, adding a single extra space between options will 
> convert a working jvm opts clause into one that always generates 
> NoClassDefFoundError when tasktrackers exec child tasks.
> As laboriously diagnosed, conf/hadoop-site.xml  was used to set the property 
> and runs were done on "Amazon EC2 Ubuntu 8.04 hardy AMI" (Debian version 
> "lenny/sid") using java 1.6.0_07-b06.  Multiple slaves nodes were used and 
> after conf changes, stop-all.sh and start-all.sh were run before each test.  
> The job config props as found on the slave did not appear to have a truncated 
> or damaged value.  It made no difference whether @taskid@ appeared at the end 
> or middle of the options and absence of @taskid@ did not eliminate the 
> problem.
> This bug wastes considerable time because the error looks like a classpath 
> problem and even after the java opts property is suspected, a character 
> quoting or unsupported option seems more likely than a length limit.  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-4917) fs -lsr does not align correctly when the username lengths are different

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-4917.
--

Resolution: Fixed

Closing this as fixed, as -R was added.

> fs -lsr does not align correctly when the username lengths are different
> 
>
> Key: HADOOP-4917
> URL: https://issues.apache.org/jira/browse/HADOOP-4917
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>  Labels: newbie
>
> For example,
> {noformat}
> bash-3.2$ ./bin/hadoop fs -lsr /user
> drwx--   - nicholas supergroup  0 2008-12-18 15:17 /user/nicholas
> -rw-r--r--   3 nn_sze supergroup   1366 2008-12-18 15:17 
> /user/nicholas/a.txt
> drwx--   - tsz  supergroup  0 2008-11-25 15:55 /user/tsz
> -rw---   3 tsz supergroup   1366 2008-11-25 15:53 /user/tsz/r.txt
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-4898) findbugs target and docs target which uses forrest is yet to be ported using IVY

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-4898.
--

Resolution: Won't Fix

The forrest was cut down and the ants were eaten by birds.

> findbugs target and docs target which uses forrest is yet to be ported using 
> IVY 
> -
>
> Key: HADOOP-4898
> URL: https://issues.apache.org/jira/browse/HADOOP-4898
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Giridharan Kesavan
>
> findbugs ant target is yet to be ported to use IVY for dependency management.
> The reason is that ivy can be used for resolving the findbugs.jar file, 
> but the test-patch ant target uses the findbugs bin directory.  
> The docs ant target uses forrest and java5; 
> forrest artifacts are unavailable on the centralized repo to be used through 
> ivy.
> Thanks,
> Giri



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-4900) some dependencies are yet to be resolved using IVY

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-4900.
--

Resolution: Won't Fix

Likely stale given all the rework on the build process.

> some dependencies are yet to be resolved using IVY 
> ---
>
> Key: HADOOP-4900
> URL: https://issues.apache.org/jira/browse/HADOOP-4900
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: build
>Reporter: Giridharan Kesavan
>
> Though we are using ivy for resolving dependencies, not all the dependencies 
> are resolved through Ivy.
> The reason is the unavailability of the appropriate version of the artifacts 
> in the repository and the ambiguity of the version of the dependencies.
> At the moment below is the list of dependencies which are still resolved from 
> the local lib directories.  
> under the lib folder
> commons-cli-2.0-SNAPSHOT.jar   - yet to be available in the centralized repo
> kfs-0.2.2.jar - not available in the 
> maven repo.
> hsqldb-1.8.0.10.jar   - latest available version is 
> under the lib/jsp-2.1 folder
> jsp-2.1.jar  - version # unknown  
> jsp-api-2.1.jar  - version # unknown
> under src/test/lib/  folder 
> ftplet-api-1.0.0-SNAPSHOT.jar   -  unavailable in 
> the maven repo
> ftpserver-server-1.0.0-SNAPSHOT.jar   -  unavailable in the 
> maven repo
> ftpserver-core-1.0.0-SNAPSHOT.jar  -  unavailable in the 
> maven repo
> mina-core-2.0.0-M2-20080407.124109-12.jar-  unavailable in the maven repo
> under src/contrib/chukwa/lib 
> json.jar   - version # 
> unknown
> under src/contrib/thriftfs/lib  
> libthrift.jar   -  unavailable in the 
> maven repo.
> Thanks,
> Giri



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-4870) Ant test-patch goal (and test/bin/test-patch.sh script) fails without warnings if ANT_HOME environment variable is not set

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-4870.
--

Resolution: Won't Fix

We've moved away from ant. Closing.

> Ant test-patch goal (and test/bin/test-patch.sh script) fails without 
> warnings if ANT_HOME environment variable is not set
> --
>
> Key: HADOOP-4870
> URL: https://issues.apache.org/jira/browse/HADOOP-4870
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.18.2
> Environment: Ubuntu linux 8.10
>Reporter: Francesco Salbaroli
>Priority: Minor
>
> A call to "ant test-patch" fails if ANT_HOME is not set as an environment 
> variable.
> No errors if variable set with export or in /etc/environment



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-4853) Improvement to IPC

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-4853.
--

Resolution: Incomplete

A lot of this has changed with the move to PB. Closing as stale.

> Improvement to IPC
> --
>
> Key: HADOOP-4853
> URL: https://issues.apache.org/jira/browse/HADOOP-4853
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 0.20.0
>Reporter: Arun C Murthy
>Assignee: Arun C Murthy
>
> I'd like to propose an improvement for consideration given my experience of 
> working on HADOOP-4348:
> Currently the first call doubles up as a 'connection setup' trigger. I'd like 
> to propose adding a new 'init' call which is always called by the clients for 
> connection setup. The advantages are:
> * We could fold in the getProtocolVersion call into the setup call, this 
> ensures that the Server always checks for protocol versions, regardless of 
> whether the (malicious?) client does an explicit call for getProtocolVersion 
> or not.
> * We could authorize the connection here.
> * We could add a check to ensure that the Server instance actually 
> implements the protocol used by the client to communicate, rather than 
> failing on the first IPC call.
> The flip side is an extra round-trip.
> Let's discuss.
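For discussion purposes, a minimal sketch of what such an explicit 
connection-setup ("init") exchange could look like; the ConnectionInit name 
and wire layout are hypothetical and not the actual Hadoop IPC format:

{code}
// Hypothetical sketch of an explicit connection-setup exchange; names and
// wire layout are illustrative only.
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

class ConnectionInit {
  final String protocolName;  // interface the client intends to call
  final long clientVersion;   // folds in the getProtocolVersion value

  ConnectionInit(String protocolName, long clientVersion) {
    this.protocolName = protocolName;
    this.clientVersion = clientVersion;
  }

  // Client side: sent once, immediately after the connection is opened.
  void write(DataOutputStream out) throws IOException {
    out.writeUTF(protocolName);
    out.writeLong(clientVersion);
    out.flush();
  }

  // Server side: validate before any RPC call is dispatched.
  static ConnectionInit readAndCheck(DataInputStream in, Class<?> serverImpl,
                                     long serverVersion) throws IOException {
    ConnectionInit init = new ConnectionInit(in.readUTF(), in.readLong());
    if (init.clientVersion != serverVersion) {
      throw new IOException("Version mismatch for " + init.protocolName);
    }
    boolean implemented = false;
    for (Class<?> iface : serverImpl.getInterfaces()) {
      implemented |= iface.getName().equals(init.protocolName);
    }
    if (!implemented) {
      throw new IOException(serverImpl + " does not implement "
          + init.protocolName);
    }
    // Connection-level authorization could be hooked in here as well.
    return init;
  }
}
{code}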



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-4850) Fix IPC Client to not use UGI

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-4850.
--

Resolution: Incomplete

I'm going to close this as stale, since it is probably too late.

> Fix IPC Client to not use UGI
> -
>
> Key: HADOOP-4850
> URL: https://issues.apache.org/jira/browse/HADOOP-4850
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 0.20.0
>Reporter: Arun C Murthy
>Assignee: Arun C Murthy
>
> Hadoop embraced JAAS via HADOOP-4348.
> We need to fix IPC Client to use standard features of JAAS such as 
> LoginContext, Subject etc. rather than UserGroupInformation in the IPC 
> header, Client.Connection etc.
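For reference, a minimal sketch of the plain JAAS pattern (LoginContext plus 
Subject.doAs) that the description points to; the login configuration entry 
name "HadoopClient" is hypothetical:

{code}
// Minimal JAAS sketch: authenticate via LoginContext and run client work
// under the resulting Subject instead of carrying a UGI in the IPC header.
import java.security.PrivilegedAction;
import javax.security.auth.Subject;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

public class JaasClientSketch {
  public static void main(String[] args) throws LoginException {
    // "HadoopClient" must match an entry in the JAAS login configuration file.
    LoginContext lc = new LoginContext("HadoopClient");
    lc.login();
    Subject subject = lc.getSubject();

    // The IPC call would run inside doAs, so the Subject travels on the
    // access-control context rather than in the request itself.
    Subject.doAs(subject, (PrivilegedAction<Void>) () -> {
      System.out.println("Running as: " + subject.getPrincipals());
      return null;
    });
  }
}
{code}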



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-4815) S3FileSystem.renameRecursive(..) does not work correctly if src contains Java regular expression special characters

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-4815.
--

Resolution: Incomplete

S3 implementation was replaced.  Closing as stale.

> S3FileSystem.renameRecursive(..) does not work correctly if src contains Java 
> regular expression special characters
> ---
>
> Key: HADOOP-4815
> URL: https://issues.apache.org/jira/browse/HADOOP-4815
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>
> In S3FileSystem, the variable srcPath is not supposed to be a regular 
> expression but is used as a regular expression in the line below.
> {code}
> Path newDst = new Path(oldSrcPath.replaceFirst(srcPath, dstPath));
> {code}
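A minimal sketch of the usual fix, quoting both the pattern and the 
replacement so regex metacharacters in the paths are treated literally; this 
is illustrative only, not the actual S3FileSystem patch:

{code}
// Demonstrates the bug and a literal-string fix for replaceFirst.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LiteralReplaceDemo {
  public static void main(String[] args) {
    String oldSrcPath = "/bucket/a+b/file";  // '+' is a regex metacharacter
    String srcPath = "/bucket/a+b";
    String dstPath = "/bucket/c";

    // Buggy: srcPath is treated as a regex, so "a+b" matches "ab", "aab", ...
    // but never the literal text "a+b", and the rename silently does nothing.
    String buggy = oldSrcPath.replaceFirst(srcPath, dstPath);

    // Fixed: quote both the pattern and the replacement so they are literal.
    String fixed = oldSrcPath.replaceFirst(
        Pattern.quote(srcPath), Matcher.quoteReplacement(dstPath));

    System.out.println(buggy);  // /bucket/a+b/file (unchanged)
    System.out.println(fixed);  // /bucket/c/file
  }
}
{code}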



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-4813) Avoid a buffer copy while replying to RPC requests.

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-4813.
--

Resolution: Incomplete

Closing as stale given the RPC engine has been replaced.

> Avoid a buffer copy while replying to RPC requests.
> ---
>
> Key: HADOOP-4813
> URL: https://issues.apache.org/jira/browse/HADOOP-4813
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Raghu Angadi
>  Labels: newbie
>
> The RPC server first serializes the RPC response to a ByteArrayOutputStream 
> and then creates a new array to write to the socket. For most responses the 
> RPC handler is able to write the entire response in-line. If we could use 
> the same buffer used by the ByteArrayOutputStream, we could avoid this copy.
> As mentioned in HADOOP-4802, yet another copy could be avoided (in most 
> cases) if we used a static direct buffer for the responses (not proposed 
> for this jira).
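A minimal illustration of the idea, assuming a small ByteArrayOutputStream 
subclass that exposes its internal buffer so the response can be wrapped 
without the toByteArray() copy; this is not the actual Hadoop Server code:

{code}
// Sketch: avoid toByteArray()'s copy by exposing the stream's backing array
// (ByteArrayOutputStream keeps it in the protected fields 'buf' and 'count').
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

class ExposedByteArrayOutputStream extends ByteArrayOutputStream {
  byte[] buffer() { return buf; }
  int length() { return count; }
}

public class ResponseBufferDemo {
  public static void main(String[] args) throws IOException {
    ExposedByteArrayOutputStream response = new ExposedByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(response);
    out.writeInt(42);       // stand-in for serializing the RPC response
    out.writeUTF("ok");
    out.flush();

    // No extra array: wrap the stream's own buffer for the socket write.
    ByteBuffer toSocket =
        ByteBuffer.wrap(response.buffer(), 0, response.length());
    System.out.println("bytes ready without copying: " + toSocket.remaining());
  }
}
{code}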



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-4776) Windows installation fails with "bin/hadoop: line 243: c:\Program: command not found"

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-4776.
--

Resolution: Won't Fix

Closing this as won't fix now that Hadoop ships with batch files.

> Windows installation fails with "bin/hadoop: line 243: c:\Program: command 
> not found"
> -
>
> Key: HADOOP-4776
> URL: https://issues.apache.org/jira/browse/HADOOP-4776
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.19.0
> Environment: Windows
>Reporter: m costello
>Priority: Minor
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> Perhaps a space in the path name is confusing Cygwin.   The JAVA_HOME path is 
> the default  "C:\Program Files\Java\jdk1.6.0_11".
> Changing
>   JAVA_PLATFORM=`CLASSPATH=${CLASSPATH} ${JAVA} 
> org.apache.hadoop.util.PlatformName | sed -e "s/ /_/g"`
> to
>   JAVA_PLATFORM=`CLASSPATH=${CLASSPATH} "${JAVA}" 
> org.apache.hadoop.util.PlatformName | sed -e "s/ /_/g"`
> appears to correct the problem.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-4603) Installation on Solaris needs additional PATH setting

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-4603.
--

Resolution: Won't Fix

Now I really am going to close this as Won't Fix, especially since we finally 
removed the requirement for whoami.

> Installation on Solaris needs additional PATH setting
> -
>
> Key: HADOOP-4603
> URL: https://issues.apache.org/jira/browse/HADOOP-4603
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.18.2
> Environment: Solaris 10 x86
>Reporter: Jon Brisbin
> Attachments: HADOOP-4603, id_instead_of_whoami.diff
>
>
> A default installation as outlined in the docs won't start on Solaris 10 x86. 
> The "whoami" utility is in path "/usr/ucb" on Solaris 10, which isn't in the 
> standard PATH environment variable unless the user has added that 
> specifically. The documentation should reflect this.
> Solaris 10 also seemed to throw NPEs if you didn't explicitly set the IP 
> address to bind the servers to. Simply overriding the IP address fixes the 
> problem.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-4594) Monitoring Scripts for Nagios

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-4594.
--

Resolution: Won't Fix

While I believe that such a collection would be valuable, it is no longer 
appropriate to ship these with Hadoop given the general death of contrib.  It'd 
be better suited to something like Apache Extras or maybe Bigtop or ... Closing.

> Monitoring Scripts for Nagios
> -
>
> Key: HADOOP-4594
> URL: https://issues.apache.org/jira/browse/HADOOP-4594
> Project: Hadoop Common
>  Issue Type: Wish
>Reporter: Edward Capriolo
>Priority: Minor
> Attachments: HADOOP-4594.patch
>
>
> I would like to create a set of local (via NRPE) and remote check scripts that 
> can be shipped with the Hadoop distribution and used to monitor Hadoop. I have 
> already completed the NRPE scripts. The second set of scripts would use wget 
> to read the output of the Hadoop web interfaces. Do these already exist?
> I guess these would fall under a new contrib project.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-4582) create-hadoop-image doesn't fail with expired Java binary URL

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-4582.
--

Resolution: Won't Fix

We removed this stuff a long time ago. Closing.

> create-hadoop-image doesn't fail with expired Java binary URL
> -
>
> Key: HADOOP-4582
> URL: https://issues.apache.org/jira/browse/HADOOP-4582
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: contrib/cloud
>Affects Versions: 0.18.1
>Reporter: Karl Anderson
>Priority: Minor
>
> Part of creating a Hadoop EC2 image involves putting the URL for the Java 
> binary into hadoop-ec2-env.sh.  This URL is time-sensitive; a working URL will 
> eventually redirect to an HTML warning page.  create-hadoop-image-remote does 
> not notice this, and will create, bundle, and register a non-working image, 
> which launch-cluster will launch, but on which the hadoop commands will not 
> work.
> To fix, check the output status of the "sh java.bin" command in 
> create-hadoop-image-remote, die with that status, and check for that status 
> when create-hadoop-image-remote is run.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-4522) Capacity Scheduler needs to re-read its configuration

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-4522.
--

Resolution: Fixed

Fixed a while back.

> Capacity Scheduler needs to re-read its configuration
> -
>
> Key: HADOOP-4522
> URL: https://issues.apache.org/jira/browse/HADOOP-4522
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Vivek Ratan
> Attachments: 4522.1.patch
>
>
> An external application (an Ops script, or some CLI-based tool) can change 
> the configuration of the Capacity Scheduler (change the capacities of various 
> queues, for example) by updating its config file. This application then needs 
> to tell the Capacity Scheduler that its config has changed, which causes the 
> Scheduler to re-read its configuration. It's possible that the Capacity 
> Scheduler may need to interact with external applications in other similar 
> ways. 
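For illustration, a minimal sketch of the kind of refresh hook described; the 
class and method names are hypothetical and do not correspond to the real 
Capacity Scheduler API:

{code}
// Hypothetical sketch of a re-readable scheduler configuration.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ReloadableQueueConfig {
  private volatile Map<String, Integer> queueCapacities = new ConcurrentHashMap<>();

  // Called at startup and again whenever an external tool signals a change.
  public synchronized void refresh() {
    Map<String, Integer> fresh = loadFromConfigFile();
    queueCapacities = fresh;  // swap atomically; readers see old or new, never a mix
  }

  public int capacityOf(String queue) {
    return queueCapacities.getOrDefault(queue, 0);
  }

  // Stand-in for parsing the scheduler's configuration file.
  private Map<String, Integer> loadFromConfigFile() {
    Map<String, Integer> m = new ConcurrentHashMap<>();
    m.put("default", 100);  // capacity in percent, for illustration
    return m;
  }
}
{code}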



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-4503) ant jar when run for first time does not include version information

2014-07-21 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-4503.
--

Resolution: Incomplete

Closing as stale.

> ant jar when run for first time does not include version information
> ---
>
> Key: HADOOP-4503
> URL: https://issues.apache.org/jira/browse/HADOOP-4503
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: linux builds and windows builds
>Reporter: Sreekanth Ramakrishnan
>  Labels: newbie
> Attachments: antfile, antfile1
>
>
> Ant jar when run for first time does not include version information.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Build failed in Jenkins: Hadoop-Common-0.23-Build #1017

2014-07-21 Thread Apache Jenkins Server
See 

--
[...truncated 8295 lines...]
Running org.apache.hadoop.fs.TestFileSystemTokens
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.55 sec
Running org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.893 sec
Running org.apache.hadoop.fs.TestLocalFSFileContextSymlink
Tests run: 61, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 2.612 sec
Running org.apache.hadoop.fs.TestHarFileSystem
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.396 sec
Running org.apache.hadoop.fs.TestFcLocalFsUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.593 sec
Running org.apache.hadoop.fs.TestLocalDirAllocator
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.113 sec
Running org.apache.hadoop.fs.TestLocalFileSystemPermission
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.587 sec
Running org.apache.hadoop.fs.TestFileSystemCaching
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.923 sec
Running org.apache.hadoop.fs.TestLocalFSFileContextCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.802 sec
Running org.apache.hadoop.fs.TestPath
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.849 sec
Running org.apache.hadoop.fs.TestListFiles
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.644 sec
Running org.apache.hadoop.fs.TestHarFileSystemBasics
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.315 sec
Running org.apache.hadoop.fs.TestChecksumFileSystem
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.009 sec
Running org.apache.hadoop.fs.TestGetFileBlockLocations
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.765 sec
Running org.apache.hadoop.fs.TestFsShellCopy
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.244 sec
Running org.apache.hadoop.fs.TestDU
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.199 sec
Running org.apache.hadoop.fs.TestAvroFSInput
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.486 sec
Running org.apache.hadoop.fs.shell.TestPathExceptions
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.114 sec
Running org.apache.hadoop.fs.shell.TestCommandFactory
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.207 sec
Running org.apache.hadoop.fs.shell.TestPathData
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.852 sec
Running org.apache.hadoop.fs.shell.TestCopy
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.953 sec
Running org.apache.hadoop.fs.TestHardLink
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.281 sec
Running org.apache.hadoop.fs.TestFilterFileSystem
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.706 sec
Running org.apache.hadoop.fs.TestLocalFSFileContextMainOperations
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.133 sec
Running org.apache.hadoop.fs.TestTrash
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.164 sec
Running org.apache.hadoop.fs.viewfs.TestChRootedFileSystem
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.175 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemDelegation
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.514 sec
Running org.apache.hadoop.fs.viewfs.TestFcMainOperationsLocalFs
Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.823 sec
Running org.apache.hadoop.fs.viewfs.TestFcCreateMkdirLocalFs
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.164 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.331 sec
Running org.apache.hadoop.fs.viewfs.TestChRootedFs
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.054 sec
Running org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.354 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsTrash
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.937 sec
Running org.apache.hadoop.fs.viewfs.TestViewfsFileStatus
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.787 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsLocalFs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.844 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAuthorityLocalFs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.978 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.524 sec
Running org.apache.hadoop.fs.viewfs.TestFcPermissionsLocalFs
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0,