[jira] [Created] (HADOOP-11407) Adding socket receive buffer size support in Client.java

2014-12-15 Thread Liang Xie (JIRA)
Liang Xie created HADOOP-11407:
--

 Summary: Adding socket receive buffer size support in Client.java
 Key: HADOOP-11407
 URL: https://issues.apache.org/jira/browse/HADOOP-11407
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 2.6.0
Reporter: Liang Xie
Assignee: Liang Xie


It would be good if the Client class had a socketReceiveBufferSize, just like
the Server's socketSendBufferSize.
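For illustration, a minimal sketch of what such a client-side knob could look
like (the class and method names below are assumptions for illustration, not
the committed API):

{noformat}
// Hypothetical sketch: ClientSocketTuning and tuneSocket() are
// illustrative names, not part of HADOOP-11407 itself.
import java.net.Socket;
import java.net.SocketException;

class ClientSocketTuning {
  static void tuneSocket(Socket socket, int receiveBufferSize)
      throws SocketException {
    if (receiveBufferSize > 0) {
      // Hint the OS receive buffer size, mirroring what the Server
      // already does for its send buffer.
      socket.setReceiveBufferSize(receiveBufferSize);
    }
  }
}
{noformat}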



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-common-trunk-Java8 #45

2014-12-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-common-trunk-Java8/45/changes

Changes:

[harsh] MAPREDUCE-6194. Bubble up final exception in failures during creation 
of output collectors. Contributed by Varun Saxena.

--
[...truncated 5188 lines...]
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.52 sec - in 
org.apache.hadoop.io.nativeio.TestNativeIO
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestWritable
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.294 sec - in 
org.apache.hadoop.io.TestWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestIOUtils
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.346 sec - in 
org.apache.hadoop.io.TestIOUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestTextNonUTF8
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.142 sec - in 
org.apache.hadoop.io.TestTextNonUTF8
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestMD5Hash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.172 sec - in 
org.apache.hadoop.io.TestMD5Hash
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestMapFile
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.957 sec - in 
org.apache.hadoop.io.TestMapFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestWritableName
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.16 sec - in 
org.apache.hadoop.io.TestWritableName
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestSortedMapWritable
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.188 sec - in 
org.apache.hadoop.io.TestSortedMapWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestSequenceFileSync
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.646 sec - in 
org.apache.hadoop.io.TestSequenceFileSync
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.retry.TestFailoverProxy
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.18 sec - in 
org.apache.hadoop.io.retry.TestFailoverProxy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.retry.TestRetryProxy
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.183 sec - in 
org.apache.hadoop.io.retry.TestRetryProxy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestDefaultStringifier
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.319 sec - in 
org.apache.hadoop.io.TestDefaultStringifier
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBloomMapFile
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.188 sec - in 
org.apache.hadoop.io.TestBloomMapFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBytesWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.17 sec - in 
org.apache.hadoop.io.TestBytesWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestWritableUtils
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.131 sec - in 
org.apache.hadoop.io.TestWritableUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBooleanWritable
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.168 sec - in 
org.apache.hadoop.io.TestBooleanWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestDataByteBuffers
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.333 sec - in 
org.apache.hadoop.io.TestDataByteBuffers
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestVersionedWritable
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.139 sec - in 

Build failed in Jenkins: Hadoop-Common-trunk #1344

2014-12-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/1344/changes

Changes:

[harsh] MAPREDUCE-6194. Bubble up final exception in failures during creation 
of output collectors. Contributed by Varun Saxena.

--
[...truncated 4738 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.471 sec - in 
org.apache.hadoop.metrics2.sink.TestFileSink
Running org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.767 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl
Running org.apache.hadoop.metrics2.impl.TestMetricsConfig
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.281 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsConfig
Running org.apache.hadoop.metrics2.impl.TestMetricsCollectorImpl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.24 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsCollectorImpl
Running org.apache.hadoop.metrics2.impl.TestGangliaMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.436 sec - in 
org.apache.hadoop.metrics2.impl.TestGangliaMetrics
Running org.apache.hadoop.metrics2.impl.TestGraphiteMetrics
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.386 sec - in 
org.apache.hadoop.metrics2.impl.TestGraphiteMetrics
Running org.apache.hadoop.metrics2.impl.TestSinkQueue
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.485 sec - in 
org.apache.hadoop.metrics2.impl.TestSinkQueue
Running org.apache.hadoop.metrics2.impl.TestMetricsSourceAdapter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.444 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsSourceAdapter
Running org.apache.hadoop.metrics2.impl.TestMetricsVisitor
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.383 sec - in 
org.apache.hadoop.metrics2.impl.TestMetricsVisitor
Running org.apache.hadoop.metrics2.lib.TestMutableMetrics
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.467 sec - in 
org.apache.hadoop.metrics2.lib.TestMutableMetrics
Running org.apache.hadoop.metrics2.lib.TestMetricsRegistry
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.387 sec - in 
org.apache.hadoop.metrics2.lib.TestMetricsRegistry
Running org.apache.hadoop.metrics2.lib.TestMetricsAnnotations
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.454 sec - in 
org.apache.hadoop.metrics2.lib.TestMetricsAnnotations
Running org.apache.hadoop.metrics2.lib.TestUniqNames
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 sec - in 
org.apache.hadoop.metrics2.lib.TestUniqNames
Running org.apache.hadoop.metrics2.lib.TestInterns
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.222 sec - in 
org.apache.hadoop.metrics2.lib.TestInterns
Running org.apache.hadoop.metrics2.source.TestJvmMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.53 sec - in 
org.apache.hadoop.metrics2.source.TestJvmMetrics
Running org.apache.hadoop.metrics2.filter.TestPatternFilter
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.425 sec - in 
org.apache.hadoop.metrics2.filter.TestPatternFilter
Running org.apache.hadoop.conf.TestConfigurationSubclass
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.391 sec - in 
org.apache.hadoop.conf.TestConfigurationSubclass
Running org.apache.hadoop.conf.TestGetInstances
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.316 sec - in 
org.apache.hadoop.conf.TestGetInstances
Running org.apache.hadoop.conf.TestConfigurationDeprecation
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.235 sec - in 
org.apache.hadoop.conf.TestConfigurationDeprecation
Running org.apache.hadoop.conf.TestDeprecatedKeys
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.535 sec - in 
org.apache.hadoop.conf.TestDeprecatedKeys
Running org.apache.hadoop.conf.TestConfiguration
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.252 sec - in 
org.apache.hadoop.conf.TestConfiguration
Running org.apache.hadoop.conf.TestReconfiguration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.35 sec - in 
org.apache.hadoop.conf.TestReconfiguration
Running org.apache.hadoop.conf.TestConfServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.618 sec - in 
org.apache.hadoop.conf.TestConfServlet
Running org.apache.hadoop.test.TestMultithreadedTestUtil
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.184 sec - in 
org.apache.hadoop.test.TestMultithreadedTestUtil
Running org.apache.hadoop.test.TestTimedOutTestsListener
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.124 sec - in 
org.apache.hadoop.test.TestTimedOutTestsListener
Running org.apache.hadoop.metrics.TestMetricsServlet
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.088 sec - in 

Re: Solaris Port

2014-12-15 Thread Steve Loughran
On 14 December 2014 at 16:52, Allen Wittenauer a...@altiscale.com wrote:

 Well, slight correction:  only one thing in the code that has been
 replaced.  There are two patches waiting to get reviewed and applied that
 fix the rest of the shipping shell code: HADOOP-10788 and HADOOP-11346.
 HDFS-7460 is waiting on HADOOP-10788 since I’m consolidating the tomcat
 driver code, but it will be effectively a clone of HADOOP-10788’s code.


looked @ these, and they are beyond my current knowledge of the shell
scripts, so I'm not in a position to review right now, sorry.



 I haven’t looked at all at things like test-patch or other stuff
 in dev-support, which I’m sure are all in an… uhh… “interesting” state.



there's been recent work on that script; it's working fairly well right now.



Contributing to Hadoop

2014-12-15 Thread Raghavendra Vaidya
Folks,

I want to contribute to Hadoop ... I have downloaded the hadoop source and
set it up in IntelliJ on a Mac ...

I would like to start by executing / writing unit test cases ... could
someone point me to some resources on how to do that?


Regards

Raghavendra Vaidya


[jira] [Created] (HADOOP-11408) TestRetryCacheWithHA.testUpdatePipeline failed in trunk

2014-12-15 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HADOOP-11408:
--

 Summary: TestRetryCacheWithHA.testUpdatePipeline failed in trunk
 Key: HADOOP-11408
 URL: https://issues.apache.org/jira/browse/HADOOP-11408
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Yongjun Zhang


https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport/

Error Message
{quote}
After waiting the operation updatePipeline still has not taken effect on NN yet
Stacktrace

java.lang.AssertionError: After waiting the operation updatePipeline still has 
not taken effect on NN yet
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testClientRetryWithFailover(TestRetryCacheWithHA.java:1278)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline(TestRetryCacheWithHA.java:1176)
{quote}

Found by the tool proposed in HADOOP-11045:

{quote}
[yzhang@localhost jenkinsftf]$ ./determine-flaky-tests-hadoop.py -j 
Hadoop-Hdfs-trunk -n 28 | tee bt.log
Recently FAILED builds in url: 
https://builds.apache.org//job/Hadoop-Hdfs-trunk
THERE ARE 4 builds (out of 6) that have failed tests in the past 28 days, 
as listed below:

===https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport (2014-12-15 
03:30:01)
Failed test: 
org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
Failed test: 
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
Failed test: 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
===https://builds.apache.org/job/Hadoop-Hdfs-trunk/1972/testReport (2014-12-13 
10:32:27)
Failed test: 
org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
===https://builds.apache.org/job/Hadoop-Hdfs-trunk/1971/testReport (2014-12-13 
03:30:01)
Failed test: 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
===https://builds.apache.org/job/Hadoop-Hdfs-trunk/1969/testReport (2014-12-11 
03:30:01)
Failed test: 
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
Failed test: 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
Failed test: 
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization

Among 6 runs examined, all failed tests #failedRuns: testName:
3: 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
2: org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
2: 
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
1: 
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization
{quote}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11408) TestRetryCacheWithHA.testUpdatePipeline failed in trunk

2014-12-15 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang resolved HADOOP-11408.

Resolution: Duplicate

 TestRetryCacheWithHA.testUpdatePipeline failed in trunk
 ---

 Key: HADOOP-11408
 URL: https://issues.apache.org/jira/browse/HADOOP-11408
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Yongjun Zhang

 https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport/
 Error Message
 {quote}
 After waiting the operation updatePipeline still has not taken effect on NN 
 yet
 Stacktrace
 java.lang.AssertionError: After waiting the operation updatePipeline still 
 has not taken effect on NN yet
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testClientRetryWithFailover(TestRetryCacheWithHA.java:1278)
   at 
 org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline(TestRetryCacheWithHA.java:1176)
 {quote}
 Found by the tool proposed in HADOOP-11045:
 {quote}
 [yzhang@localhost jenkinsftf]$ ./determine-flaky-tests-hadoop.py -j 
 Hadoop-Hdfs-trunk -n 5 | tee bt.log
 Recently FAILED builds in url: 
 https://builds.apache.org//job/Hadoop-Hdfs-trunk
 THERE ARE 4 builds (out of 6) that have failed tests in the past 5 days, 
 as listed below:
 ===https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport 
 (2014-12-15 03:30:01)
 Failed test: 
 org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
 Failed test: 
 org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
 Failed test: 
 org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
 ===https://builds.apache.org/job/Hadoop-Hdfs-trunk/1972/testReport 
 (2014-12-13 10:32:27)
 Failed test: 
 org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
 ===https://builds.apache.org/job/Hadoop-Hdfs-trunk/1971/testReport 
 (2014-12-13 03:30:01)
 Failed test: 
 org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
 ===https://builds.apache.org/job/Hadoop-Hdfs-trunk/1969/testReport 
 (2014-12-11 03:30:01)
 Failed test: 
 org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
 Failed test: 
 org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
 Failed test: 
 org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization
 Among 6 runs examined, all failed tests #failedRuns: testName:
 3: 
 org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
 2: org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
 2: 
 org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
 1: 
 org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Contributing to Hadoop

2014-12-15 Thread prem vishnoi
I want to work on a live hadoop project for 2 hours every day.
Please help me.

Warm Regards,
Prema Vishnoi

“Try not to become a man of success but rather to become a man of value”

On Mon, Dec 15, 2014 at 8:05 PM, Raghavendra Vaidya 
raghavendra.vai...@gmail.com wrote:


 Folks,

 I want to contribute to Hadoop ... I have downloaded the hadoop source and
 set it up in IntelliJ on a Mac ...

 I would like to start by executing / writing unit test cases ... could
 someone point me to some resources on how to do that?


 Regards

 Raghavendra Vaidya




Re: Contributing to Hadoop

2014-12-15 Thread Rich Haase
I want to preface this response by saying that I have not contributed code
to Hadoop.

However, I have been interested in doing so and I can share some of my
research with those of you who are also interested in contributing.

First of all, check out these links for information on how to get a
development environment setup and general information about how to
contribute:

http://apache.org/foundation/getinvolved.html
http://wiki.apache.org/hadoop/HowToContribute
http://wiki.apache.org/hadoop/GitAndHadoop

Next, the developers contributing to Hadoop are very busy people who are
often doing work in their free time.  So general questions like "How can
I contribute?" are difficult for them to answer every time they come up.

Once you have a development environment set up and you can successfully
compile/run tests (this is not a small feat), you have a couple of routes
to take for contributing:

1.  Contribute code you have written that enhances the software in some way
that is useful to you, i.e. "scratching an itch".

2.  Look for unassigned issues in JIRA (issues.apache.org/jira/).  In the
Hadoop projects, good starter JIRAs are often labeled "newbie" or
"newbie++".  These may be a good place to look at contributing code.

3.  You can help the projects by writing documentation on the wiki, or
responding to questions on the mailing lists (personally, I enjoy doing
this where I can). 

I hope these suggestions will help you find some ways in which you can
contribute to the Hadoop ecosystem.  Keep in mind that writing
documentation and helping others can be as beneficial to the community as
writing code. 

Cheers,

Rich

Rich Haase | Sr. Software Engineer | Pandora
m 303.887.1146 | rha...@pandora.com




On 12/15/14, 10:30 AM, prem vishnoi vishnoip...@gmail.com wrote:

I want to work on a live hadoop project for 2 hours every day.
Please help me.

Warm Regards,
Prema Vishnoi

"Try not to become a man of success but rather to become a man of value"

On Mon, Dec 15, 2014 at 8:05 PM, Raghavendra Vaidya 
raghavendra.vai...@gmail.com wrote:


 Folks,

 I want to contribute to Hadoop ... I have downloaded the hadoop source and
 set it up in IntelliJ on a Mac ...

 I would like to start by executing / writing unit test cases ... could
 someone point me to some resources on how to do that?


 Regards

 Raghavendra Vaidya





Re: Contributing to Hadoop

2014-12-15 Thread Jay Vyas
One easy place to contribute in small increments could be reproducing bugs in
jiras that are filed and open.

If every day you spent an hour reproducing a bug filed in a jira, you could 
come up to speed eventually on a lot of sharp corners of the source code, and 
probably contribute some value to the community as well.

 On Dec 15, 2014, at 12:30 PM, prem vishnoi vishnoip...@gmail.com wrote:
 
 I want to work on a live hadoop project for 2 hours every day.
 Please help me.
 
 Warm Regards,
 Prema Vishnoi
 
 “Try not to become a man of success but rather to become a man of value”
 
 On Mon, Dec 15, 2014 at 8:05 PM, Raghavendra Vaidya 
 raghavendra.vai...@gmail.com wrote:
 
 
 Folks,
 
 I want to contribute to Hadoop ... I have downloaded the hadoop source and
 set it up in IntelliJ on a Mac ...
 
 I would like to start by executing / writing unit test cases ... could
 someone point me to some resources on how to do that?
 
 
 Regards
 
 Raghavendra Vaidya
 
 


[jira] [Resolved] (HADOOP-7852) consolidate templates

2014-12-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-7852.
--
Resolution: Won't Fix

Templates and configuration tool have been removed from trunk. Closing as won't 
fix.

 consolidate templates
 -

 Key: HADOOP-7852
 URL: https://issues.apache.org/jira/browse/HADOOP-7852
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 0.23.0
Reporter: Joe Crobak
Priority: Minor

 the hadoop-common project has templates for hdfs-site.xml and mapred-site.xml 
 that are used by the config generator scripts.  The
 hadoop-{mapreduce,hdfs} projects also have {mapred,hdfs}-site.xml templates,
 and the templates don't match. It would be good if these could be 
 consolidated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-8505) hadoop scripts to support user native lib dirs

2014-12-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-8505.
--
Resolution: Implemented

Trunk/3.x supports appending to JAVA_LIBRARY_PATH. Closing as implemented.

 hadoop scripts to support user native lib dirs
 --

 Key: HADOOP-8505
 URL: https://issues.apache.org/jira/browse/HADOOP-8505
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 1.0.3
Reporter: Steve Loughran
Priority: Minor

 you can set up a custom classpath with bin/hadoop through the 
 HADOOP_CLASSPATH env, but there is no equivalent for the native libraries;
 the only way to get them picked up is to drop them into lib/native/${arch}/,
 which impacts everything.
 Having some HADOOP_NATIVE_LIB_PATH env variable would let people add new 
 native binaries to Hadoop commands.
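 For context, a minimal sketch of the JVM side (illustrative only, not Hadoop
 code): System.loadLibrary resolves against java.library.path, which the
 launcher scripts assemble, so appending a user directory there is all a
 HADOOP_NATIVE_LIB_PATH-style variable would need to do.
 {noformat}
 // Illustrative sketch, not Hadoop code: the JVM resolves native libs
 // from the directories in java.library.path, which bin/hadoop would
 // extend with the proposed HADOOP_NATIVE_LIB_PATH entries.
 public class NativeLibPathDemo {
   public static void main(String[] args) {
     System.out.println(System.getProperty("java.library.path"));
     System.loadLibrary("hadoop"); // finds libhadoop.so on that path
   }
 }
 {noformat}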



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: a friendly suggestion for developers when uploading patches

2014-12-15 Thread Konstantin Shvachko
Guys,

I agree that revision numbers are useful if you need to reference a
particular attachment, as with all your other arguments.
My general point is that the infrastructure we use should be convenient for
the users to do such simple things automatically, rather than us
introducing rules to overcome certain shortcomings of the tool. I think if
the Attachments list was
1. ordered by date rather than by name, and
2. enumerated, like subtasks are,
then it would have solved the issue discussed here.

I did communicate changing the default ordering for attachments with INFRA
some time ago. Don't remember if I created a jira. Should we open one now?

Thanks,
--Konst

On Sun, Dec 14, 2014 at 6:44 AM, Steve Loughran ste...@hortonworks.com
wrote:

 a couple more benefits

 1. when you post a patch you can add a comment like "patch 003 killed NPE
 in auth", and the comment history then integrates with the revisions. You
 can also do this in your private git repository, to correlate commits there
 with patch versions.

 2. they list in creation order in a directory.

 #2 matters for me as when I create patches I stick them in a dir specific
 to that JIRA; I can work out what the highest number is and increment it by
 one for creating a new one...yet retain the whole patch history locally.

 I also download external patches to review & apply to an incoming/ dir;
 numbering helps me manage that & verify that I really am applying the
 relevant patch.

 Doesn't mean we should change the order though. I don't think that is
 something you can do on a per-project basis, so take it to infrastructure@


 On 14 December 2014 at 01:33, Yongjun Zhang yzh...@cloudera.com wrote:

  Hi Konst,
 
  Thanks for the good suggestion, certainly that would help.
 
  Here are the advantages of including a revision number in the patch name:

 - we would have the same ordering by name or by date
 - it would be easier to refer to an individual patch, say, when we need to
 refer to multiple patches when making a comment (e.g., comparing revX
 with revY, here are the pros and cons ...)
 - when we create a new rev patch file before submitting, if we used the
 same name as the previous one, it would overwrite the previous one
 - when we download patch files to the same directory, depending on the
 order of downloading, the patches would possibly not appear in the order
 that they were submitted
 
  Best regards,
 
  --Yongjun
 
  On Sat, Dec 13, 2014 at 10:54 AM, Konstantin Shvachko 
  shv.had...@gmail.com
  wrote:
  
   Hello guys,
  
   The problem here is not in the patch naming conventions, but in the jira
   default ordering scheme for attachments.
   Mentioned it on several occasions. Currently attachments use "sort by
   name" sorting as the default. And it should be changed to "sort by date".
   Then you don't need any naming conventions to adjust to current sorting
   settings. You just see them in the order submitted and choose the last
   for a review or a commit.
  
   Does anybody have permissions & skills to change the default order type
   for attachments in the Jira?
  
   Thanks,
   --Konst
  
   On Thu, Dec 4, 2014 at 10:18 AM, Tsuyoshi OZAWA 
  ozawa.tsuyo...@gmail.com
   wrote:
   
Thanks Yongjun and Harsh for updating Wiki!
   
Thanks,
- Tsuyoshi
   
On Thu, Dec 4, 2014 at 9:43 AM, Yongjun Zhang yzh...@cloudera.com
   wrote:
 Thanks Harsh, I just made a change in

 https://wiki.apache.org/hadoop/HowToContribute#Naming_your_patch

 based on the discussion in this thread.

 --Yongjun

 On Wed, Dec 3, 2014 at 2:20 PM, Harsh J ha...@cloudera.com
 wrote:

 I've added you in as YongjunZhang. Please let me know if you are
  still
 unable to edit after a relogin.

 On Wed, Dec 3, 2014 at 1:43 AM, Yongjun Zhang 
 yzh...@cloudera.com
wrote:
  Thanks Allen, Andrew and Tsuyoshi.
 
  My wiki user name is YongjunZhang, I will appreciate it very
 much
  if
  someone can give me the permission to edit the wiki pages.
 Thanks.
 
  --Yongjun
 
  On Tue, Dec 2, 2014 at 11:04 AM, Andrew Wang 
andrew.w...@cloudera.com
  wrote:
 
  I just updated the wiki to say that the version number format
 is
 preferred.
  Yongjun, if you email out your wiki username, someone (?) can
  give
you
  privs.
 
  On Tue, Dec 2, 2014 at 10:16 AM, Allen Wittenauer 
   a...@altiscale.com
  wrote:
 
   I think people forget we have a wiki that documents this and
   other
 things
   ...
  
  
  https://wiki.apache.org/hadoop/HowToContribute#Naming_your_patch
  
   On Dec 2, 2014, at 10:01 AM, Tsuyoshi OZAWA 
ozawa.tsuyo...@gmail.com
 
   wrote:
  
jiraNameId.[branchName.]revisionNum.patch*
   
+1 for this format. Thanks for starting the discussion,
   Yongjun.
   
- Tsuyoshi
   
On Tue, Dec 

Re: a friendly suggestion for developers when uploading patches

2014-12-15 Thread Andrew Wang
I'm all for changing the default sort order, but it doesn't address the
point that Steve and I brought up about local downloads.

If you want to push on the INFRA JIRA though, please feel free. I'm +1 for
that.

Best,
Andrew

On Mon, Dec 15, 2014 at 11:40 AM, Konstantin Shvachko shv.had...@gmail.com
wrote:

 Guys,

 I agree that revision numbers are useful if you need to reference a
 particular attachment, as with all your other arguments.
 My general point is that the infrastructure we use should be convenient for
 the users to do such simple things automatically, rather than us
 introducing rules to overcome certain shortcomings of the tool. I think if
 the Attachments list was
 1. ordered by date rather than by name, and
 2. enumerated, like subtasks are,
 then it would have solved the issue discussed here.

 I did communicate changing the default ordering for attachments with INFRA
 some time ago. Don't remember if I created a jira. Should we open one now?

 Thanks,
 --Konst

 On Sun, Dec 14, 2014 at 6:44 AM, Steve Loughran ste...@hortonworks.com
 wrote:
 
  a couple more benefits
 
  1. when you post a patch you can add a comment like "patch 003 killed NPE
  in auth", and the comment history then integrates with the revisions. You
  can also do this in your private git repository, to correlate commits there
  with patch versions.
 
  2. they list in creation order in a directory.
 
  #2 matters for me as when I create patches I stick them in a dir specific
  to that JIRA; I can work out what the highest number is and increment it
 by
  one for creating a new one...yet retain the whole patch history locally.
 
  I also download external patches to review & apply to an incoming/ dir;
  numbering helps me manage that & verify that I really am applying the
  relevant patch.
 
  Doesn't mean we should change the order though. I don't think that is
  something you can do on a per-project basis, so take it to
 infrastructure@
 
 
  On 14 December 2014 at 01:33, Yongjun Zhang yzh...@cloudera.com wrote:
 
   Hi Konst,
  
   Thanks for the good suggestion, certainly that would help.
  
   Here are the advantages of including a revision number in the patch name:

  - we would have the same ordering by name or by date
  - it would be easier to refer to an individual patch, say, when we need to
  refer to multiple patches when making a comment (e.g., comparing revX
  with revY, here are the pros and cons ...)
  - when we create a new rev patch file before submitting, if we used the
  same name as the previous one, it would overwrite the previous one
  - when we download patch files to the same directory, depending on the
  order of downloading, the patches would possibly not appear in the order
  that they were submitted
  
   Best regards,
  
   --Yongjun
  
   On Sat, Dec 13, 2014 at 10:54 AM, Konstantin Shvachko 
   shv.had...@gmail.com
   wrote:
   
Hello guys,
   
 The problem here is not in the patch naming conventions, but in the jira
 default ordering scheme for attachments.
 Mentioned it on several occasions. Currently attachments use "sort by name"
 sorting as the default. And it should be changed to "sort by date". Then
 you don't need any naming conventions to adjust to current sorting
 settings. You just see them in the order submitted and choose the last for
 a review or a commit.
   
 Does anybody have permissions & skills to change the default order type
 for attachments in the Jira?
   
Thanks,
--Konst
   
On Thu, Dec 4, 2014 at 10:18 AM, Tsuyoshi OZAWA 
   ozawa.tsuyo...@gmail.com
wrote:

 Thanks Yongjun and Harsh for updating Wiki!

 Thanks,
 - Tsuyoshi

 On Thu, Dec 4, 2014 at 9:43 AM, Yongjun Zhang yzh...@cloudera.com
 
wrote:
  Thanks Harsh, I just made a change in
 
  https://wiki.apache.org/hadoop/HowToContribute#Naming_your_patch
 
  based on the discussion in this thread.
 
  --Yongjun
 
  On Wed, Dec 3, 2014 at 2:20 PM, Harsh J ha...@cloudera.com
  wrote:
 
  I've added you in as YongjunZhang. Please let me know if you are
   still
  unable to edit after a relogin.
 
  On Wed, Dec 3, 2014 at 1:43 AM, Yongjun Zhang 
  yzh...@cloudera.com
 wrote:
   Thanks Allen, Andrew and Tsuyoshi.
  
   My wiki user name is YongjunZhang, I will appreciate it very
  much
   if
   someone can give me the permission to edit the wiki pages.
  Thanks.
  
   --Yongjun
  
   On Tue, Dec 2, 2014 at 11:04 AM, Andrew Wang 
 andrew.w...@cloudera.com
   wrote:
  
   I just updated the wiki to say that the version number format
  is
  preferred.
   Yongjun, if you email out your wiki username, someone (?) can
   give
 you
   privs.
  
   On Tue, Dec 2, 2014 at 10:16 AM, Allen Wittenauer 
a...@altiscale.com
   wrote:
  
I think people 

[jira] [Created] (HADOOP-11409) FileContext.getFileContext can stack overflow if default fs misconfigured

2014-12-15 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-11409:
---

 Summary: FileContext.getFileContext can stack overflow if default 
fs misconfigured
 Key: HADOOP-11409
 URL: https://issues.apache.org/jira/browse/HADOOP-11409
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Jason Lowe


If the default filesystem is misconfigured such that it doesn't have a scheme,
then FileContext.getFileContext(URI, Configuration) will call
FileContext.getFileContext(Configuration), which in turn calls the former, and
we loop until the stack explodes.
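A reduced sketch of the call pattern (the method bodies below are simplified
assumptions for illustration, not the actual FileContext source):

{noformat}
// Simplified sketch of the mutual recursion described above.
import java.net.URI;

class FileContextSketch {
  static FileContextSketch getFileContext(URI defaultFsUri, Object conf) {
    if (defaultFsUri.getScheme() == null) {
      // No scheme: fall back to the "default" overload...
      return getFileContext(conf);
    }
    return new FileContextSketch();
  }

  static FileContextSketch getFileContext(Object conf) {
    // ...which re-derives the (still scheme-less) default fs URI and
    // calls the first overload again: unbounded mutual recursion.
    return getFileContext(URI.create("//namenode:8020/"), conf);
  }
}
{noformat}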



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11410) make the rpath of libhadoop.so configurable

2014-12-15 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11410:
-

 Summary: make the rpath of libhadoop.so configurable 
 Key: HADOOP-11410
 URL: https://issues.apache.org/jira/browse/HADOOP-11410
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


We should make the rpath of {{libhadoop.so}} configurable, so that we can use a 
different rpath if needed.  The {{RPATH}} of {{libhadoop.so}} is primarily used 
to control where {{dlopen}} looks for shared libraries by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Contributing to Hadoop

2014-12-15 Thread Ravi Prakash
Please also go through BUILDING.txt in the code base to find out how to build 
the source code. A very good area to start off learning about Hadoop and 
helping the community is to fix a failing unit test case. 
 

 On Monday, December 15, 2014 10:02 AM, Jay Vyas 
jayunit100.apa...@gmail.com wrote:
   

 One easy place to contribute in small increments could be reproducing bugs in
jiras that are filed and open.

If every day you spent an hour reproducing a bug filed in a jira, you could 
come up to speed eventually on a lot of sharp corners of the source code, and 
probably contribute some value to the community as well.

 On Dec 15, 2014, at 12:30 PM, prem vishnoi vishnoip...@gmail.com wrote:
 
 I want to work on a live hadoop project for 2 hours every day.
 Please help me.
 
 Warm Regards,
 Prema Vishnoi
 
 “Try not to become a man of success but rather to become a man of value”
 
 On Mon, Dec 15, 2014 at 8:05 PM, Raghavendra Vaidya 
 raghavendra.vai...@gmail.com wrote:
 
 
 Folks,
 
 I want to contribute to Hadoop ... I have downloaded the hadoop source and
 set it up in IntelliJ on a Mac ...
 
 I would like to start by executing / writing unit test cases ... could
 someone point me to some resources on how to do that?
 
 
 Regards
 
 Raghavendra Vaidya
 
 

   

[jira] [Created] (HADOOP-11411) Hive build failure on hadoop-2.7 due to HADOOP-11356

2014-12-15 Thread Jason Dere (JIRA)
Jason Dere created HADOOP-11411:
---

 Summary: Hive build failure on hadoop-2.7 due to HADOOP-11356
 Key: HADOOP-11411
 URL: https://issues.apache.org/jira/browse/HADOOP-11411
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jason Dere


HADOOP-11356 removes org.apache.hadoop.fs.permission.AccessControlException, 
causing build break on Hive when compiling against hadoop-2.7:

{noformat}
shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java:[808,63]
 cannot find symbol
  symbol:   class AccessControlException
  location: package org.apache.hadoop.fs.permission
[INFO] 1 error
{noformat}
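The surviving class lives in the security package, so the likely Hive-side fix
is essentially an import swap; a hedged sketch (an assumption based on the
removal, not the actual Hive patch):

{noformat}
// before: import org.apache.hadoop.fs.permission.AccessControlException;
import org.apache.hadoop.security.AccessControlException;

// Hypothetical helper, for illustration only.
class Hadoop23ShimsSketch {
  void rethrowAccessControl(Exception e) throws AccessControlException {
    if (e instanceof AccessControlException) {
      throw (AccessControlException) e;
    }
  }
}
{noformat}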




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11411) Hive build failure on hadoop-2.7 due to HADOOP-11356

2014-12-15 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere resolved HADOOP-11411.
-
  Resolution: Duplicate
Release Note: Opened Hive Jira at HIVE-9115

 Hive build failure on hadoop-2.7 due to HADOOP-11356
 

 Key: HADOOP-11411
 URL: https://issues.apache.org/jira/browse/HADOOP-11411
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jason Dere

 HADOOP-11356 removes org.apache.hadoop.fs.permission.AccessControlException, 
 causing build break on Hive when compiling against hadoop-2.7:
 {noformat}
 shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java:[808,63]
  cannot find symbol
   symbol:   class AccessControlException
   location: package org.apache.hadoop.fs.permission
 [INFO] 1 error
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: a friendly suggestion for developers when uploading patches

2014-12-15 Thread Konstantin Shvachko
Did some research on changing the default order of attachments.
It is not a configuration or INFRA issue.
It turned out to be a controversial topic in Jira itself: the change was
explicitly rejected by the developers, leaving many users unsatisfied.

https://jira.atlassian.com/browse/JRA-28290

I thought it should be a simple thing to fix...
Oh well. Revision numbers are the way to go for now.

Thanks,
--Konst

On Mon, Dec 15, 2014 at 11:55 AM, Andrew Wang andrew.w...@cloudera.com
wrote:

 I'm all for changing the default sort order, but it doesn't address the
 point that Steve and I brought up about local downloads.

 If you want to push on the INFRA JIRA though, please feel free. I'm +1 for
 that.

 Best,
 Andrew

 On Mon, Dec 15, 2014 at 11:40 AM, Konstantin Shvachko 
 shv.had...@gmail.com
 wrote:
 
  Guys,
 
  I agree that revision numbers are useful if you need to reference a
  particular attachment, as with all your other arguments.
  My general point is that the infrastructure we use should be convenient
  for the users to do such simple things automatically, rather than us
  introducing rules to overcome certain shortcomings of the tool. I think
  if the Attachments list was
  1. ordered by date rather than by name, and
  2. enumerated, like subtasks are,
  then it would have solved the issue discussed here.
 
  I did communicate changing the default ordering for attachments with
 INFRA
  some time ago. Don't remember if I created a jira. Should we open one
 now?
 
  Thanks,
  --Konst
 
  On Sun, Dec 14, 2014 at 6:44 AM, Steve Loughran ste...@hortonworks.com
  wrote:
  
   a couple more benefits
  
   1. when you post a patch you can add a comment like "patch 003 killed NPE
   in auth", and the comment history then integrates with the revisions. You
   can also do this in your private git repository, to correlate commits there
   with patch versions.
  
   2. they list in creation order in a directory.
  
   #2 matters for me as when I create patches I stick them in a dir
 specific
   to that JIRA; I can work out what the highest number is and increment
 it
  by
   one for creating a new one...yet retain the whole patch history
 locally.
  
   I also download external patches to review & apply to an incoming/ dir;
   numbering helps me manage that & verify that I really am applying the
   relevant patch.
  
   Doesn't mean we should change the order though. I don't think that is
   something you can do on a per-project basis, so take it to
  infrastructure@
  
  
   On 14 December 2014 at 01:33, Yongjun Zhang yzh...@cloudera.com
 wrote:
  
Hi Konst,
   
Thanks for the good suggestion, certainly that would help.
   
Here are the advantages of including a revision number in the patch name:

   - we would have the same ordering by name or by date
   - it would be easier to refer to an individual patch, say, when we need to
   refer to multiple patches when making a comment (e.g., comparing revX
   with revY, here are the pros and cons ...)
   - when we create a new rev patch file before submitting, if we used the
   same name as the previous one, it would overwrite the previous one
   - when we download patch files to the same directory, depending on the
   order of downloading, the patches would possibly not appear in the order
   that they were submitted
   
Best regards,
   
--Yongjun
   
On Sat, Dec 13, 2014 at 10:54 AM, Konstantin Shvachko 
shv.had...@gmail.com
wrote:

 Hello guys,

  The problem here is not in the patch naming conventions, but in the jira
  default ordering scheme for attachments.
  Mentioned it on several occasions. Currently attachments use "sort by
  name" sorting as the default. And it should be changed to "sort by date".
  Then you don't need any naming conventions to adjust to current sorting
  settings. You just see them in the order submitted and choose the last
  for a review or a commit.

  Does anybody have permissions & skills to change the default order type
  for attachments in the Jira?

 Thanks,
 --Konst

 On Thu, Dec 4, 2014 at 10:18 AM, Tsuyoshi OZAWA 
ozawa.tsuyo...@gmail.com
 wrote:
 
  Thanks Yongjun and Harsh for updating Wiki!
 
  Thanks,
  - Tsuyoshi
 
  On Thu, Dec 4, 2014 at 9:43 AM, Yongjun Zhang 
 yzh...@cloudera.com
  
 wrote:
   Thanks Harsh, I just made a change in
  
  
 https://wiki.apache.org/hadoop/HowToContribute#Naming_your_patch
  
   based on the discussion in this thread.
  
   --Yongjun
  
   On Wed, Dec 3, 2014 at 2:20 PM, Harsh J ha...@cloudera.com
   wrote:
  
   I've added you in as YongjunZhang. Please let me know if you
 are
still
   unable to edit after a relogin.
  
   On Wed, Dec 3, 2014 at 1:43 AM, Yongjun Zhang 
   yzh...@cloudera.com
  wrote:
Thanks 

Re: Solaris Port SOLVED!

2014-12-15 Thread Colin McCabe
Thanks, Malcolm.  I reviewed it.  The only thing you still have to do
is hit "submit patch" to get a Jenkins run.  See our HowToContribute
wiki page for more details.

wiki.apache.org/hadoop/HowToContribute

best,
Colin

On Sat, Dec 13, 2014 at 9:22 PM, malcolm malcolm.kaval...@oracle.com wrote:
 I am checking on the latest release of Solaris 11 and yes, it is still
 thread safe (or MT Safe as documented on the man page).

 strerror checks the error code, and returns the same "unknown error" string
 as terror does, if it receives an invalid code. I checked this on Windows,
 Solaris and Linux (though my changes only affect Solaris platforms).

 JIRA newbie question:

 I have filed the JIRA attaching the patch  HADOOP-11403 against the trunk,
 asking for reviewers in the comments section.
 Is there any other protocol I should follow ?

 Thanks,
 Malcolm


 On 12/14/2014 01:08 AM, Asokan, M wrote:

 Malcolm,
 That's great! Is strerror() thread-safe in the recent version of
 Solaris?  In any case, to be correct you still need to make sure that the
 code passed to strerror() is a valid one.  For this you need to check errno
 after the call to strerror().  Please check the code snippet I sent earlier
 for HPUX.

 -- Asokan
 
 From: malcolm [malcolm.kaval...@oracle.com]
 Sent: Saturday, December 13, 2014 3:13 PM
 To: common-dev@hadoop.apache.org
 Subject: Re: Solaris Port SOLVED!

 Wiping egg off face  ...

 After consulting with the Solaris team (and looking at the source code
 and man page) ,  it turns out that strerror itself on Solaris is MT-Safe
 ! (Just like HPUX)

 So, after all this effort, all I need to do is modify terror as follows:

  const char* terror(int errnum)
  {
  #if defined(__sun)
    return strerror(errnum); //  MT-Safe under Solaris
  #else
    if ((errnum < 0) || (errnum >= sys_nerr)) {
      return "unknown error.";
    }
    return sys_errlist[errnum];
  #endif
  }

 And in two other files where sys_errlist is referenced directly
 (NativeIO and hdfs_http_client.c), I replaced this direct access instead
 with a call to terror.

 Thanks for all your help and patience,

 I'll file a JIRA asap,

 Cheers,
 Malcolm

 On 12/13/2014 05:26 PM, malcolm wrote:

 Thanks Asokan,

 Looked up Gcc's thread local variables, seems a bit complex though and
 quite specific to Gnu.

 Initialization of the static errlist array should be thread safe, i.e.
 initially the array is nulled out, and afterwards if two threads write
 to the same address, then they would be writing the same string.

 But if we are ok with changing 5 files, not just terror, then I would
 just remove terror completely and use strerror_r (or the alternatives
 for Windows and HP_UX) in the caller code instead i.e. using your
 suggestion for a local buffer in the caller, wherever needed. The more
 I think about it, the more this seems to be the right thing to do.

 Cheers,
 Malcolm


 On 12/13/2014 04:38 PM, Asokan, M wrote:

 Malcolm,
  Gcc supports thread-local variables. See

 https://gcc.gnu.org/onlinedocs/gcc-3.3.1/gcc/Thread-Local.html

 I am not sure about native compilers on Solaris, HPUX, or AIX.

 In any case, I found out that the Windows native code in Hadoop seems
 to handle error messages properly. Here is what I found:

 $ find ~/work/hadoop/hadoop-trunk/ -name '*.c'|xargs grep FormatMessage|awk -F: '{print $1}'|sort -u

 /home/asokan/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c


 /home/asokan/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsMappingWin.c


 /home/asokan/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c



 $ find ~/work/hadoop/hadoop-trunk/ -name '*.c'|xargs grep terror|awk
 -F: '{print $1}'|sort -u

 /home/asokan/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/native/src/exception.c


 /home/asokan/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/SharedFileDescriptorFactory.c


 /home/asokan/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c


 /home/asokan/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocketWatcher.c


 /home/asokan/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsMapping.c



 This means you need not worry about the Windows version of terror().
 You need to change five files that contain UNIX specific native code.

 I have a question on your suggested implementation:

 How do you initialize the static errlist array in a thread-safe manner?

 
 Here is 

[jira] [Created] (HADOOP-11412) POMs mention The Apache Software License rather than Apache License

2014-12-15 Thread JIRA
Hervé Boutemy created HADOOP-11412:
--

 Summary: POMs mention The Apache Software License rather than 
Apache License
 Key: HADOOP-11412
 URL: https://issues.apache.org/jira/browse/HADOOP-11412
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Hervé Boutemy
Priority: Trivial


like JAMES-821 or RAT-128 or MPOM-48



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


MRAppMaster running on a docker container failed to load openssl cipher.

2014-12-15 Thread Chen He
Tried to run teragen based on hadoop 2.6.0 using docker and met the
following error:

2014-12-15 04:15:21,385 DEBUG [main]
org.apache.hadoop.crypto.OpensslCipher: Failed to load OpenSSL Cipher.
java.lang.UnsatisfiedLinkError:
org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl()Z
at
org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl(Native Method)
at
org.apache.hadoop.crypto.OpensslCipher.<clinit>(OpensslCipher.java:84)
at
org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec.<init>(OpensslAesCtrCryptoCodec.java:50)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
Method)
at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:129)
at
org.apache.hadoop.crypto.CryptoCodec.getInstance(CryptoCodec.java:67)
at
org.apache.hadoop.crypto.CryptoCodec.getInstance(CryptoCodec.java:100)
at org.apache.hadoop.fs.Hdfs.<init>(Hdfs.java:91)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
Method)
at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at
org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:129)
at
org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:157)
at
org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:242)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:334)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:331)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at
org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:331)
at
org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:448)
at
org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:470)
at
org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils.getDefaultFileContext(JobHistoryUtils.java:247)
at
org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils.ensurePathInDefaultFileSystem(JobHistoryUtils.java:277)
at
org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils.getConfiguredHistoryStagingDirPrefix(JobHistoryUtils.java:191)
at
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.serviceInit(JobHistoryEventHandler.java:147)
at
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at
org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:444)
at
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$4.run(MRAppMaster.java:1499)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1496)
at
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1429)
2014-12-15 04:15:21,390 DEBUG [main]
org.apache.hadoop.util.PerformanceAdvisory: Crypto codec
org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec is not available.


Regards!

Chen


Re: Solaris Port SOLVED!

2014-12-15 Thread malcolm

Done, and added the comment as you requested.
I attached a second patch file to the JIRA (with .002 appended as per 
convention) assuming Jenkins knows to take the latest version, since I 
understand that I cannot remove the previous patch file.


On 12/16/2014 04:12 AM, Colin McCabe wrote:

Thanks, Malcolm.  I reviewed it.  The only thing you still have to do
is hit "submit patch" to get a Jenkins run.  See our HowToContribute
wiki page for more details.

wiki.apache.org/hadoop/HowToContribute

best,
Colin

On Sat, Dec 13, 2014 at 9:22 PM, malcolm malcolm.kaval...@oracle.com wrote:

I checked on the latest release of Solaris 11 and yes, it is still
thread-safe (or MT-Safe, as documented on the man page).

strerror checks the error code and, if it receives an invalid code, returns
the same unknown-error string as terror does. I checked this on Windows,
Solaris and Linux (though my changes only affect Solaris platforms).

JIRA newbie question:

I have filed the JIRA (HADOOP-11403) against trunk with the patch attached,
asking for reviewers in the comments section.
Is there any other protocol I should follow?

Thanks,
Malcolm


On 12/14/2014 01:08 AM, Asokan, M wrote:

Malcolm,
 That's great! Is strerror() thread-safe in the recent version of
Solaris?  In any case, to be correct you still need to make sure that the
code passed to strerror() is a valid one.  For this you need to check errno
after the call to strerror().  Please check the code snippet I sent earlier
for HPUX.
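
(For illustration, a minimal sketch of that errno-checking pattern; the
wrapper name and fallback string here are assumptions, not Hadoop's actual
code:)

  #include <errno.h>
  #include <string.h>

  /* Sketch: clear errno, call strerror(), and treat a changed errno
   * (e.g. EINVAL) as an out-of-range code.  Wrapper name and fallback
   * string are illustrative. */
  static const char* checked_strerror(int errnum)
  {
      errno = 0;
      const char *msg = strerror(errnum);
      if (errno != 0) {
          return "unknown error.";
      }
      return msg;
  }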

-- Asokan

From: malcolm [malcolm.kaval...@oracle.com]
Sent: Saturday, December 13, 2014 3:13 PM
To: common-dev@hadoop.apache.org
Subject: Re: Solaris Port SOLVED!

Wiping egg off face ...

After consulting with the Solaris team (and looking at the source code
and man page), it turns out that strerror itself on Solaris is MT-Safe!
(Just like HPUX.)

So, after all this effort, all I need to do is modify terror as follows:

  const char* terror(int errnum)
  {

  #if defined(__sun)
     return strerror(errnum); //  MT-Safe under Solaris
  #else
     if ((errnum < 0) || (errnum >= sys_nerr)) {
       return "unknown error.";
     }
     return sys_errlist[errnum];
  #endif
  }

And in two other files where sys_errlist is referenced directly
(NativeIO.c and hdfs_http_client.c), I replaced the direct access with a
call to terror.
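
(Concretely, the call-site change is of roughly this shape; the surrounding
function and message format are illustrative, not the actual Hadoop code:)

  #include <stdio.h>

  extern const char* terror(int errnum);  /* the helper shown above */

  static void log_io_error(int err)
  {
      /* before: fprintf(stderr, "I/O error: %s\n", sys_errlist[err]); */
      fprintf(stderr, "I/O error: %s\n", terror(err));
  }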

Thanks for all your help and patience,

I'll file a JIRA asap,

Cheers,
Malcolm

On 12/13/2014 05:26 PM, malcolm wrote:

Thanks Asokan,

I looked up GCC's thread-local variables; they seem a bit complex, though,
and quite specific to GNU.

Initialization of the static errlist array should be thread-safe, i.e.
initially the array is nulled out, and afterwards, if two threads write
to the same address, they would be writing the same string.

But if we are OK with changing 5 files, not just terror, then I would
just remove terror completely and use strerror_r (or the alternatives
for Windows and HPUX) in the caller code instead, i.e. using your
suggestion of a local buffer in the caller, wherever needed. The more
I think about it, the more this seems to be the right thing to do.
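
(A minimal sketch of that approach, assuming the XSI strerror_r that
returns 0 on success; the function name and buffer size are illustrative:)

  #include <stdio.h>
  #include <string.h>

  /* Sketch: a caller-owned stack buffer, so each thread fills its own
   * copy and no shared static state is needed.  Assumes XSI strerror_r;
   * glibc's GNU variant returns char* instead of int. */
  static void print_errno_message(int errnum)
  {
      char buf[256];
      if (strerror_r(errnum, buf, sizeof(buf)) != 0) {
          snprintf(buf, sizeof(buf), "unknown error %d", errnum);
      }
      fprintf(stderr, "%s\n", buf);
  }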

Cheers,
Malcolm


On 12/13/2014 04:38 PM, Asokan, M wrote:

Malcolm,
  GCC supports thread-local variables. See

https://gcc.gnu.org/onlinedocs/gcc-3.3.1/gcc/Thread-Local.html

I am not sure about native compilers on Solaris, HPUX, or AIX.
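
(As a sketch, that extension applied to a per-thread error buffer might
look like the following; the names are illustrative assumptions:)

  #include <string.h>

  /* Sketch: __thread gives each thread its own copy of the buffer, so
   * the pointer can be returned without locking.  (C11 later
   * standardized this as _Thread_local.)  Assumes XSI strerror_r. */
  static __thread char tls_errbuf[256];

  const char* terror_tls(int errnum)
  {
      if (strerror_r(errnum, tls_errbuf, sizeof(tls_errbuf)) != 0) {
          return "unknown error.";
      }
      return tls_errbuf;
  }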

In any case, I found out that the Windows native code in Hadoop seems
to handle error messages properly. Here is what I found:

$ find ~/work/hadoop/hadoop-trunk/ -name '*.c' | xargs grep FormatMessage | awk -F: '{print $1}' | sort -u

/home/asokan/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
/home/asokan/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsMappingWin.c
/home/asokan/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c

$ find ~/work/hadoop/hadoop-trunk/ -name '*.c' | xargs grep terror | awk -F: '{print $1}' | sort -u

/home/asokan/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/native/src/exception.c
/home/asokan/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/SharedFileDescriptorFactory.c
/home/asokan/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c
/home/asokan/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocketWatcher.c
/home/asokan/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsMapping.c

This means you need not worry about the Windows version of terror().
You need to change 

[jira] [Created] (HADOOP-11413) Remove unused CryptoCodec in org.apache.hadoop.fs.Hdfs

2014-12-15 Thread Yi Liu (JIRA)
Yi Liu created HADOOP-11413:
---

 Summary: Remove unused CryptoCodec in org.apache.hadoop.fs.Hdfs
 Key: HADOOP-11413
 URL: https://issues.apache.org/jira/browse/HADOOP-11413
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor


In org.apache.hadoop.fs.Hdfs, the {{CryptoCodec}} is unused, and we can remove 
it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


RE: MRAppMaster running on a docker container failed to load openssl cipher.

2014-12-15 Thread Liu, Yi A
Hi Chen He,

If the native codec is not available, JCE will be used, so you can ignore that 
debug message. Actually, CryptoCodec is not necessary in org.apache.hadoop.fs.Hdfs, and I 
have created a JIRA (HADOOP-11413) to fix it.

Regards,
Yi Liu

-Original Message-
From: Chen He [mailto:airb...@gmail.com] 
Sent: Tuesday, December 16, 2014 12:31 PM
To: common-dev@hadoop.apache.org
Subject: MRAppMaster running on a docker container failed to load openssl 
cipher.

I tried to run teragen on Hadoop 2.6.0 using Docker and met the following
error:

2014-12-15 04:15:21,385 DEBUG [main]
org.apache.hadoop.crypto.OpensslCipher: Failed to load OpenSSL Cipher.
java.lang.UnsatisfiedLinkError:
org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl()Z
at
org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl(Native Method)
at
org.apache.hadoop.crypto.OpensslCipher.<clinit>(OpensslCipher.java:84)
at
org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec.<init>(OpensslAesCtrCryptoCodec.java:50)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
Method)
at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:129)
at
org.apache.hadoop.crypto.CryptoCodec.getInstance(CryptoCodec.java:67)
at
org.apache.hadoop.crypto.CryptoCodec.getInstance(CryptoCodec.java:100)
at org.apache.hadoop.fs.Hdfs.<init>(Hdfs.java:91)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
Method)
at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at
org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:129)
at
org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:157)
at
org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:242)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:334)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:331)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at
org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:331)
at
org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:448)
at
org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:470)
at
org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils.getDefaultFileContext(JobHistoryUtils.java:247)
at
org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils.ensurePathInDefaultFileSystem(JobHistoryUtils.java:277)
at
org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils.getConfiguredHistoryStagingDirPrefix(JobHistoryUtils.java:191)
at
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.serviceInit(JobHistoryEventHandler.java:147)
at
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at
org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
at
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:444)
at
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$4.run(MRAppMaster.java:1499)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1496)
at
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1429)
2014-12-15 04:15:21,390 DEBUG [main]
org.apache.hadoop.util.PerformanceAdvisory: Crypto codec 
org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec is not available.


Regards!

Chen