Deprecated configuration settings set from the core code / {core,hdfs,...}-default.xml ??

2014-08-21 Thread Niels Basjes
Hi,

I found this because I was wondering why starting something as
trivial as the Pig grunt shell prints the following messages during startup:

2014-08-21 09:36:55,171 [main] INFO
 org.apache.hadoop.conf.Configuration.deprecation - *mapred.job.tracker is
deprecated*. Instead, use mapreduce.jobtracker.address
2014-08-21 09:36:55,172 [main] INFO
 org.apache.hadoop.conf.Configuration.deprecation - *fs.default.name is
deprecated*. Instead, use fs.defaultFS

What I found is that these settings are not part of my config; they are
set by the 'core hadoop' files themselves.

I found that mapred.job.tracker is set from code when using the mapred
package (which is probably what Pig uses):
https://github.com/apache/hadoop-common/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobClient.java#L869

and that the fs.default.name is explicitly defined here as 'deprecated' in
one of the *-default.xml config files.
https://github.com/apache/hadoop-common/blob/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml#L524
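For illustration, the relevant core-default.xml entries look roughly like
this (paraphrased, not the verbatim file contents): both the old and the
new key ship with defaults, so the deprecated key is always "set" even in
a clean install:

```xml
<!-- Sketch of the relevant core-default.xml entries (paraphrased). -->
<property>
  <name>fs.defaultFS</name>
  <value>file:///</value>
  <description>The name of the default file system.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>file:///</value>
  <description>Deprecated. Use (fs.defaultFS) property instead.</description>
</property>
```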

I did some more digging and found that there are several other properties
that have been defined as deprecated that are still present in the various
*-default.xml files throughout the hadoop code base.

I used this list as a reference:
https://github.com/apache/hadoop-common/blob/trunk/hadoop-common-project/hadoop-common/src/site/apt/DeprecatedProperties.apt.vm

The ones I found so far:
./hadoop-common-project/hadoop-common/src/main/resources/core-default.xml:
 <name>fs.default.name</name>
./hadoop-common-project/hadoop-common/src/main/resources/core-default.xml:
 <name>io.bytes.per.checksum</name>
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml:
<name>mapreduce.job.counters.limit</name>
./hadoop-tools/hadoop-distcp/src/main/resources/distcp-default.xml:
 <name>mapred.job.map.memory.mb</name>
./hadoop-tools/hadoop-distcp/src/main/resources/distcp-default.xml:
 <name>mapred.job.reduce.memory.mb</name>
./hadoop-tools/hadoop-distcp/src/main/resources/distcp-default.xml:
 <name>mapreduce.reduce.class</name>

It seems to me that fixing these would remove a lot of senseless clutter
from the console output for end users.

Or is there a good reason to keep it like this?
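For context, the redirection behind these warnings can be sketched as a
simple old-key-to-new-key map. This is a standalone illustration, not
Hadoop's actual class; the real mechanism lives in
org.apache.hadoop.conf.Configuration:

```java
import java.util.HashMap;
import java.util.Map;

// Standalone sketch (not Hadoop code) of Configuration-style key
// deprecation: reads of an old key are redirected to the new key, and a
// warning is the only user-visible effect.
public class DeprecationSketch {
    private static final Map<String, String> DEPRECATIONS = new HashMap<>();
    static {
        // Mappings taken from the deprecation warnings quoted above.
        DEPRECATIONS.put("mapred.job.tracker", "mapreduce.jobtracker.address");
        DEPRECATIONS.put("fs.default.name", "fs.defaultFS");
    }

    /** Returns the preferred key, warning if the supplied key is deprecated. */
    public static String resolve(String key) {
        String newKey = DEPRECATIONS.get(key);
        if (newKey != null) {
            System.err.println(key + " is deprecated. Instead, use " + newKey);
            return newKey;
        }
        return key;
    }

    public static void main(String[] args) {
        System.out.println(resolve("fs.default.name"));   // fs.defaultFS
        System.out.println(resolve("mapred.job.tracker")); // mapreduce.jobtracker.address
    }
}
```

Since the *-default.xml files set the old keys, every read of them trips
this warning path even when the user's own config is clean.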

-- 
Best regards / Met vriendelijke groeten,

Niels Basjes


[jira] [Created] (HADOOP-10991) 'hadoop namenode -format' fails if user hadoop homedir is not under /home

2014-08-21 Thread Yaniv Kaul (JIRA)
Yaniv Kaul created HADOOP-10991:
---

 Summary: 'hadoop namenode -format' fails if user hadoop homedir is 
not under /home
 Key: HADOOP-10991
 URL: https://issues.apache.org/jira/browse/HADOOP-10991
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.5.0
 Environment: CentOS 6.5
Reporter: Yaniv Kaul
Priority: Minor


Since my homedir is on shared NFS, I couldn't create a user for hadoop on 
/home. Therefore, I've used:
useradd hadoop --home /hadoop

which worked well. I've adjusted HADOOP_HOME and friends to match correctly. 
Running hdfs namenode -format failed:
{noformat}
14/08/21 13:57:35 INFO namenode.NNConf: XAttrs enabled? true
14/08/21 13:57:35 INFO namenode.NNConf: Maximum size of an xattr: 16384
14/08/21 13:57:35 INFO namenode.FSImage: Allocated new BlockPoolId: 
BP-1696511243-10.103.234.197-1408618655940
14/08/21 13:57:35 WARN namenode.NameNode: Encountered exception during format: 
java.io.IOException: Cannot create directory 
/home/hadoop/hadoopdata/hdfs/namenode/current
at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:334)
at 
org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:546)
at 
org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:567)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:148)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:926)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1354)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
14/08/21 13:57:35 FATAL namenode.NameNode: Exception in namenode join
java.io.IOException: Cannot create directory 
/home/hadoop/hadoopdata/hdfs/namenode/current
at 
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:334)
at 
org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:546)
at 
org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:567)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:148)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:926)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1354)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
14/08/21 13:57:35 INFO util.ExitUtil: Exiting with status 1
14/08/21 13:57:35 INFO namenode.NameNode: SHUTDOWN_MSG: 
/
SHUTDOWN_MSG: Shutting down NameNode at 
lgdrm432.xiodrm.lab.emc.com/10.103.234.197
{noformat}

The error seems to be clear:
{noformat}
java.io.IOException: Cannot create directory 
/home/hadoop/hadoopdata/hdfs/namenode/current
{noformat}
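Assuming the /home/hadoop path came from a configured (or defaulted)
storage directory rather than being hard-wired, one workaround is to point
dfs.namenode.name.dir explicitly at the actual home directory in
hdfs-site.xml (the /hadoop path below matches the useradd above and is
illustrative):

```xml
<!-- hdfs-site.xml: point NameNode metadata at the real homedir (/hadoop),
     not a path under /home. Path is illustrative. -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///hadoop/hadoopdata/hdfs/namenode</value>
</property>
```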




--
This message was sent by Atlassian JIRA
(v6.2#6252)


Build failed in Jenkins: Hadoop-Common-0.23-Build #1048

2014-08-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-0.23-Build/1048/

--
[...truncated 8284 lines...]
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at 
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

Running org.apache.hadoop.io.TestVersionedWritable
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.141 sec
Running org.apache.hadoop.io.TestMapFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.271 sec
Running org.apache.hadoop.io.TestText
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.334 sec
Running org.apache.hadoop.io.TestBloomMapFile
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.719 sec
Running org.apache.hadoop.io.serializer.TestSerializationFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.524 sec
Running org.apache.hadoop.io.serializer.avro.TestAvroSerialization
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.108 sec
Running org.apache.hadoop.io.serializer.TestWritableSerialization
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.439 sec
Running org.apache.hadoop.io.TestDataByteBuffers
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.075 sec
Running org.apache.hadoop.io.TestArrayFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.222 sec
Running org.apache.hadoop.io.TestWritableName
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.286 sec
Running org.apache.hadoop.io.TestIOUtils
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.573 sec
Running org.apache.hadoop.io.TestSetFile
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.659 sec
Running org.apache.hadoop.io.TestSequenceFile
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.591 sec
Running org.apache.hadoop.io.TestObjectWritableProtos
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.691 sec
Running org.apache.hadoop.io.TestMD5Hash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.457 sec
Running org.apache.hadoop.io.TestArrayWritable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.106 sec
Running org.apache.hadoop.io.retry.TestFailoverProxy
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.458 sec
Running org.apache.hadoop.io.retry.TestRetryProxy
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.402 sec
Running org.apache.hadoop.io.TestEnumSetWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.798 sec
Running org.apache.hadoop.io.TestSecureIOUtils
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.984 sec
Running org.apache.hadoop.io.TestBytesWritable
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.226 sec
Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsByteArrays
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.458 sec
Running org.apache.hadoop.io.file.tfile.TestTFileStreams
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.8 sec
Running org.apache.hadoop.io.file.tfile.TestTFileUnsortedByteArrays

[jira] [Created] (HADOOP-10992) Merge KMS to branch-2

2014-08-21 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-10992:
---

 Summary: Merge KMS to branch-2
 Key: HADOOP-10992
 URL: https://issues.apache.org/jira/browse/HADOOP-10992
 Project: Hadoop Common
  Issue Type: Task
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur


A pre-requisite for getting HDFS encryption into branch-2 is KMS; we need to 
merge all related JIRAs:

{code}
052932e7299ff64d36287b368f94ccf8698d5c9d HADOOP-10141. Create KeyProvider API 
to separate encryption key storage from the applications. (omalley)
b72026617b038f588581d43c323718fe8120b400 HADOOP-10201. Add listing to 
KeyProvider API. (Larry McCay via omalley)
4a178b6736d54e1b1940babd7cbda34921957d01 HADOOP-10177. Create CLI tools for 
managing keys. (Larry McCay via omalley)
0cf6ccf606fceb6c06f35d72b2c2b679d71ad96c HADOOP-10237. JavaKeyStoreProvider 
needs to set keystore permissions correctly. (Larry McCay via omalley)
56d349b81d24ef1421ffcdfb822a8fe122f05c80 HADOOP-10432. Refactor SSLFactory to 
expose static method to determine HostnameVerifier. (tucu)
0d3cb277937eb7ec6a281dc7f236efe387fd HADOOP-10429. KeyStores should have 
methods to generate the materials themselves, KeyShell should use them. (tucu)
d9c1c42fdfddb810ebe2ec151f751d05e987f25e HADOOP-10427. KeyProvider 
implementations should be thread safe. (tucu)
98be41ff908acd2fa55c0b302c8a3def55987e41 HADOOP-10428. JavaKeyStoreProvider 
should accept keystore password via configuration falling back to ENV VAR. 
(tucu)
b2b05181682c2a55f5ed1cfa2c44f3390eebd5c4 HADOOP-10244. TestKeyShell improperly 
tests the results of delete (Larry McCay via omalley)
83f057e8e1d16949b94fe2e99f4232ced8156e6a HADOOP-10430. KeyProvider Metadata 
should have an optional description, there should be a method to retrieve the 
metadata from all keys. (tucu)
f6f52ca1c2df57d13fa596e074accc0f3549ff58 HADOOP-10431. Change visibility of 
KeyStore.Options getter methods to public. (tucu)
05e59fd8058f21a52d4a268af3a189c89ebad2fe HADOOP-10534. KeyProvider 
getKeysMetadata should take a list of names rather than returning all keys. 
(omalley)
16be41a63e4b3bd79b1cee4edce6df374666ca58 HADOOP-10433. Key Management Server 
based on KeyProvider API. (tucu)
4bcaa45a2ea36fb440069c7a458cdc225cb862ca HADOOP-10583. bin/hadoop key throws 
NPE with no args and assorted other fixups. (clamb via tucu)
1727e235c3d3317b2ac6d7c25ea01505853653ca HADOOP-10586. KeyShell doesn't allow 
setting Options via CLI. (clamb via tucu)
6b410f3b2e185fca963c7db664395e97d76cd6ee HADOOP-10645. TestKMS fails because 
race condition writing acl files. (tucu)
7868054902590af6dbda941f2cc8324267c8bef8 HADOOP-10611. KMS, keyVersion name 
should not be assumed to be keyName@versionNumber. (tucu)
725f087f3f2fc31190810344d0e508e34b4a126e HADOOP-10607. Create API to separate 
credential/password storage from applications. (Larry McCay via omalley)
097254f094b004404ba4754f97f906f46a12b0e4 HADOOP-10696. Add optional attributes 
to KeyProvider Options and Metadata. (tucu)
a283b91add9e9230b9597fd33355822517a1852e HADOOP-10695. KMSClientProvider should 
respect a configurable timeout. (yoderme via tucu)
6cef126f29673704c345c52995890ff48395ec1a HADOOP-10757. KeyProvider KeyVersion 
should provide the key name. (asuresh via tucu)
9b7a1cb122c6a6041e718986085ec7f6bab422c4 HADOOP-10719. Add generateEncryptedKey 
and decryptEncryptedKey methods to KeyProvider. (asuresh via tucu)
9c03a4b321db7950d5652ba03022f9ee3ebd2d6f HADOOP-10769. Create KeyProvider 
extension to handle delegation tokens. Contributed by Arun Suresh.
db91ab3d02fddfd325fd308e46f65075c2c6cd93 HADOOP-10812. Delegate 
KeyProviderExtension#toString to underlying KeyProvider. (wang)
7c7911bbd63d30932df71af536f45c20adba88ff HADOOP-10736. Add key attributes to 
the key shell. Contributed by Mike Yoder.
cfb5943d356fef911f424ed8250a9c02b706ecc6 HADOOP-10824. Refactor KMSACLs to 
avoid locking. (Benoy Antony via umamahesh)
6b9b985233c293d22f89a4deadf871230f09d7ed HADOOP-10816. KeyShell returns -1 on 
error to the shell, should be 1. (Mike Yoder via wang)
ceea01cff5762115c58817ab696cd11641bc9a98 HADOOP-10841. EncryptedKeyVersion 
should have a key name property. (asuresh via tucu)
468a4fc00921ea7bc61bb60666e9352b0ad3928b HADOOP-10842. CryptoExtension 
generateEncryptedKey method should receive the key name. (asuresh via tucu)
c6d60c6db8b22d6dc45e63073bc5bb52dc041a8c HADOOP-10750. KMSKeyProviderCache 
should be in hadoop-common. (asuresh via tucu)
c3eca9f2504ed619a3edcf3d3eafc286133911d0 HADOOP-10720. KMS: Implement 
generateEncryptedKey and decryptEncryptedKey in the REST API. (asuresh via tucu)
6ae46e601290a094019fdd8e241a90a6f269203c HADOOP-10826. Iteration on 
KeyProviderFactory.serviceLoader is thread-unsafe. (benoyantony via tucu)
22bbb1e1b1ad076cb2cac22b7863904aea903586 HADOOP-10881. Clarify usage of 
encryption and encrypted 

[jira] [Resolved] (HADOOP-10992) Merge KMS to branch-2

2014-08-21 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur resolved HADOOP-10992.
-

   Resolution: Fixed
Fix Version/s: 2.6.0

Completed.

 Merge KMS to branch-2
 -

 Key: HADOOP-10992
 URL: https://issues.apache.org/jira/browse/HADOOP-10992
 Project: Hadoop Common
  Issue Type: Task
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.6.0



[jira] [Created] (HADOOP-10993) Dump java command line to *.out file

2014-08-21 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-10993:
-

 Summary: Dump java command line to *.out file
 Key: HADOOP-10993
 URL: https://issues.apache.org/jira/browse/HADOOP-10993
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Allen Wittenauer
Priority: Minor


It might be a nice enhancement to print the contents of the java command line 
to the out file during daemon startup to help with debugging.
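One way a daemon could do this from inside the JVM (a sketch of the idea,
not a proposed patch) is via the standard RuntimeMXBean; anything printed
to stdout at startup would land in the *.out file:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.RuntimeMXBean;

// Sketch: have the daemon describe its own command line at startup.
// Uses only standard JMX APIs available in any JVM.
public class CommandLineDump {
    public static String describe() {
        RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
        StringBuilder sb = new StringBuilder();
        // JVM options (-Xmx, -D..., etc.) as passed on the command line.
        sb.append("JVM args: ").append(runtime.getInputArguments());
        // sun.java.command holds the main class and program arguments on
        // HotSpot; it may be absent on other JVMs.
        sb.append("\nCommand: ")
          .append(System.getProperty("sun.java.command", "<unknown>"));
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(describe());
    }
}
```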





[jira] [Resolved] (HADOOP-10986) hadoop tarball is twice as big as prev. version and 6 times as big unpacked

2014-08-21 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla resolved HADOOP-10986.
---

Resolution: Duplicate

 hadoop tarball is twice as big as prev. version and 6 times as big unpacked
 ---

 Key: HADOOP-10986
 URL: https://issues.apache.org/jira/browse/HADOOP-10986
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: André Kelpe
Assignee: Karthik Kambatla
Priority: Blocker

 I noticed that the binary tarball for 2.5.0 is almost 300MB, while 2.4.1 is 
 only 132MB. Unpacking the latest tarball gives me 1.8 GB of stuff, with the 
 majority in the share directory.
  
 {code}
 $ cd hadoop-2.4.1
 $ du -sh *
 364Kbin
 356Ketc
 100Kinclude
 2,3Mlib
 128Klibexec
 24K LICENSE.txt
 12K NOTICE.txt
 12K README.txt
 336Ksbin
 280Mshare
 {code}
 {code}
  $ cd hadoop-2.5.0 
  $ du -sh *
 512Kbin
 332Ketc
 100Kinclude
 4,6Mlib
 128Klibexec
 336Ksbin
 1,8Gshare
 {code}
 I also saw some warnings from tar while unpacking:
 {code}
 $ tar xf hadoop-2.5.0.tar.gz 
 tar: Ignoring unknown extended header keyword `SCHILY.dev'
 tar: Ignoring unknown extended header keyword `SCHILY.ino'
 tar: Ignoring unknown extended header keyword `SCHILY.nlink'
 tar: Ignoring unknown extended header keyword `SCHILY.dev'
 tar: Ignoring unknown extended header keyword `SCHILY.ino'
 tar: Ignoring unknown extended header keyword `SCHILY.nlink'
 tar: Ignoring unknown extended header keyword `SCHILY.dev'
 tar: Ignoring unknown extended header keyword `SCHILY.ino'
 tar: Ignoring unknown extended header keyword `SCHILY.nlink'
 tar: Ignoring unknown extended header keyword `SCHILY.dev'
 tar: Ignoring unknown extended header keyword `SCHILY.ino'
 tar: Ignoring unknown extended header keyword `SCHILY.nlink'
 tar: Ignoring unknown extended header keyword `SCHILY.dev'
 tar: Ignoring unknown extended header keyword `SCHILY.ino'
 tar: Ignoring unknown extended header keyword `SCHILY.nlink'
 tar: Ignoring unknown extended header keyword `SCHILY.dev'
 tar: Ignoring unknown extended header keyword `SCHILY.ino'
 tar: Ignoring unknown extended header keyword `SCHILY.nlink'
 {code}





[jira] [Created] (HADOOP-10995) HBase cannot run correctly with Hadoop trunk

2014-08-21 Thread Zhijie Shen (JIRA)
Zhijie Shen created HADOOP-10995:


 Summary: HBase cannot run correctly with Hadoop trunk
 Key: HADOOP-10995
 URL: https://issues.apache.org/jira/browse/HADOOP-10995
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Zhijie Shen
Priority: Critical


Several incompatible changes that happened on trunk but not on branch-2 have 
broken compatibility for HBase:

HADOOP-10348
HADOOP-8124
HADOOP-10255

In general, HttpServer and Syncable.sync have been missed.

It blocks YARN-2032, which makes the timeline server support an HBase store.





[jira] [Resolved] (HADOOP-10985) native client: split ndfs.c into meta, file, util, and permission

2014-08-21 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10985.
---

   Resolution: Fixed
Fix Version/s: HADOOP-10388

 native client: split ndfs.c into meta, file, util, and permission
 -

 Key: HADOOP-10985
 URL: https://issues.apache.org/jira/browse/HADOOP-10985
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: HADOOP-10388

 Attachments: HADOOP-10985.001.patch


 Split ndfs.c into meta.c, file.c, util.c, and permission.c.





[jira] [Created] (HADOOP-10996) run hdfs, yarn, mapred, etc from build tree

2014-08-21 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-10996:
-

 Summary: run hdfs, yarn, mapred, etc from build tree
 Key: HADOOP-10996
 URL: https://issues.apache.org/jira/browse/HADOOP-10996
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


There is a developer use case for running the shell scripts from the build 
tree.  What would it take to make it work?


