[jira] [Created] (HADOOP-12373) Add support for foodcritic

2015-09-01 Thread Travis Thompson (JIRA)
Travis Thompson created HADOOP-12373:


 Summary: Add support for foodcritic
 Key: HADOOP-12373
 URL: https://issues.apache.org/jira/browse/HADOOP-12373
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Travis Thompson


Foodcritic is the Chef "best practices" linter 
(http://acrmp.github.io/foodcritic/).

Successful output:

{noformat}
± master ✗  → foodcritic .

± master ✗  → echo $? 
0
{noformat}

Unsuccessful output:

{noformat}
± master ✗  → foodcritic .
FC048: Prefer Mixlib::ShellOut: ./recipes/server.rb:482
± master ✗  → echo $?
0
{noformat}

Here's what caused the failure:

{noformat}
482 `echo "this is bad practice"`
{noformat}
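
For context, FC048 flags shelling out via backticks and recommends Mixlib::ShellOut instead. Below is a minimal sketch of the kind of rewrite foodcritic is asking for; the command string is made up purely for illustration, while Mixlib::ShellOut itself is the standard API the rule refers to:

{noformat}
# Hypothetical replacement for the backtick call flagged at line 482:
# run the command through Mixlib::ShellOut instead of `...`.
require 'mixlib/shellout'

cmd = Mixlib::ShellOut.new('echo "this is better practice"')
cmd.run_command
cmd.error!        # raises if the command exited non-zero
puts cmd.stdout
{noformat}

Note that in the "unsuccessful" run above foodcritic still exits 0; if the precommit integration needs a non-zero exit when warnings are found, that will likely require foodcritic's epic-fail option (something like {{foodcritic -f any .}}, assuming the current CLI still supports that flag).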





[jira] [Created] (HADOOP-12372) Add support for rspec with junit

2015-09-01 Thread Travis Thompson (JIRA)
Travis Thompson created HADOOP-12372:


 Summary: Add support for rspec with junit
 Key: HADOOP-12372
 URL: https://issues.apache.org/jira/browse/HADOOP-12372
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Travis Thompson


rspec already supports JUnit output format via a plugin; just run it like this:

{noformat}
rspec -f JUnit -o results.xml
{noformat}

This will allow any generic rspec suite to work, as well as chefspec (Chef unit 
testing).  It should also soon allow kitchen test to work (it also uses rspec, 
but that currently isn't configurable; see: 
https://github.com/test-kitchen/busser-serverspec/issues/9)
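
As a quick illustration, a completely generic spec file like the one below (the file name and examples are hypothetical; only plain rspec is assumed) is all that is needed on the project side, since the JUnit output comes entirely from the formatter flag shown above:

{noformat}
# spec/example_spec.rb -- hypothetical generic spec, run via the rspec CLI
describe 'a generic rspec example' do
  it 'adds numbers' do
    expect(1 + 1).to eq(2)
  end

  it 'matches strings' do
    expect('hadoop').to start_with('had')
  end
end
{noformat}

Running {{rspec -f JUnit -o results.xml}} against a tree of such specs (with the JUnit formatter plugin installed) should emit a results.xml that Jenkins can consume; a chefspec suite works the same way, since it is just rspec underneath.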





Re: In hindsight... Re: Thinking ahead to hadoop-2.6

2014-09-17 Thread Travis Thompson
There's actually an umbrella JIRA to track issues with JDK8
(HADOOP-11090), in case anyone missed it.

At LinkedIn we've been running our Hadoop 2.3 deployment on JDK8 for
about a month now with some mixed results.  It definitely works but
there are issues, mostly around virtual memory exploding.  The reason
we took the jump early is that there is a company-wide push to move to JDK8
ASAP; I suspect this isn't something unique to LinkedIn.  To get this to work
with security enabled, we've had to apply patches that aren't even in trunk
yet because they break JDK6 compatibility.

From my perspective, based on what I've seen and people I've talked
to, there is a huge push to move to JDK8 ASAP, so it's becoming
increasingly urgent to at least get support for running on JDK8.

On Wed, Sep 17, 2014 at 9:55 AM, Allen Wittenauer a...@altiscale.com wrote:

 On Sep 17, 2014, at 2:47 AM, Steve Loughran ste...@hortonworks.com wrote:

 I don't agree. Certainly the stuff I got into Hadoop 2.5 nailed down the
 filesystem binding with more tests than ever before.

 FWIW, based upon my survey of JIRA, there are a lot of unit test 
 fixes that are only in trunk.

 But I am also aware of large organisations that are still on Java 6.
 Giving a clear roadmap (move to Java 7 now, Java 8 in XX months) can help
 them plan.

 Planning is a big thing.  That’s one of the reasons why it’d be 
 prudent to start doing 3.0+JDK8 now as well.  Even if April slips, other 
 projects and orgs are already moving to 8.  These people wonder what our 
 plans are so that they can run one JVM.  Right now our answer is ¯\_(ツ)_/¯ .

 I’m sure I can dig up a user running Hadoop 0.13 because it ran on 
 JDK5.  That doesn’t mean the open source project should stall because certain 
 orgs don’t/can't upgrade.


Drop the 2.6.0 release, branch trunk, and start rolling a
 3.0.0-alpha with JDK8 as the minimum.  2.5.1 becomes the base for all
 sustaining work.  This gives the rest of the community time to move to JDK8
 if they haven’t already.  For downstream vendors, it gives a roadmap for
 their customers who will be asking about JDK8 sooner rather than later.  By
 the time 3.0 stabilizes, we’re probably looking at April, which is perfect
 timing.


 That delays getting stuff out too much; if April slips it becomes a long
 time since an ASF release came out.

 I’m assuming you specifically mean a ‘stable’ release.  If, as 
 everyone seems to be saying, 3.x isn’t that much different from 2.x, doesn’t 
 this mean that 3.x should stabilize much more quickly than 2.x did?  In other 
 words, if 2.5 is stable and the biggest difference between it and trunk is the 
 majority of code (450+ JIRAs as of yesterday afternoon) that also happens to be 
 in 2.6, doesn’t that mean 2.6 is also extremely unstable?  (Thus supporting my 
 conjecture that 2.6 is going to be a problematic release?)

 Saying you must run on Java 8 for
 this will only scare people off and hold back adoption of 3.x, leaving 2.5
 as the last release that ends up being used for a while: the new 1.0.4.

 From the outside, trunk looks a lot like 0.21 already.  From what I can 
 tell, there is zero motivation to get it out the door and onto a roadmap, 
 primarily because there is little difference between trunk and branch-2.  This 
 is a very dangerous place to be, as those few differences, some of them years 
 old, rot and wither. :(

 Here's an alternative:

 -2.6 on Java 6, announce EOL for Java 6 support
 -2.7 on Java 7, state that the lifespan of Java 7 support will be some bounded
 time period (12-18 months)
 -trunk to build and test on Java 8 in Jenkins alongside Java 7. For that to
 be useful, someone needs to volunteer to care about build failures. Are you
 volunteering, Allen?

 This seems reasonable, except what release should folks who *require* 
 Java 8 use? Nightly trunk+patches builds? How do downstream projects test?  
 Should JDK8 fixes be going into a branch?  (I’m making the assumption that 
 fixes for JDK8 are not backward compatible with JDK7.  Hopefully they are, 
 but given our usage of private APIs…)

 I’ve been approached by a few people over the past month+ asking if I’d be 
 interested in, or would be, RM’ing 3.0.  I’m seriously considering it, especially 
 given that a) it’d be a nice learning experience for me, b) my “day job” makes it 
 practical time-wise, and c) I seem to be the only one concerned enough about quite 
 a bit of stale code to get it out the door.

 FWIW, I’m in the process of moving my test VM to JDK8 to see how bad 
 the damage truly is right now. Based on what others have seen, it seems security 
 doesn’t work, which is a pretty big deal.  I can certainly start posting trunk 
 builds on people.apache.org if folks are interested.

 -we switch trunk to Java 7 NOW. That doesn't mean a rewrite fest going
 through all catch() statements making them multi-catch, and the same for
 switching on strings.

 Yup.  There’s little reason *not* to switch 

[jira] [Created] (HADOOP-11098) [jdk8] MaxDirectMemorySize default changed between JDK7 and 8

2014-09-16 Thread Travis Thompson (JIRA)
Travis Thompson created HADOOP-11098:


 Summary: [jdk8] MaxDirectMemorySize default changed between JDK7 
and 8
 Key: HADOOP-11098
 URL: https://issues.apache.org/jira/browse/HADOOP-11098
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Travis Thompson


I noticed this because the NameNode UI shows "Max Non Heap Memory", which after 
some digging I found correlates to MaxDirectMemorySize.

JDK7
{noformat}
Heap Memory used 16.75 GB of 23 GB Heap Memory. Max Heap Memory is 23.7 GB.
Non Heap Memory used 57.32 MB of 67.38 MB Commited Non Heap Memory. Max Non 
Heap Memory is 130 MB. 
{noformat}

JDK8
{noformat}
Heap Memory used 3.02 GB of 7.65 GB Heap Memory. Max Heap Memory is 23.7 GB.
Non Heap Memory used 103.12 MB of 104.41 MB Commited Non Heap Memory. Max Non 
Heap Memory is -1 B. 
{noformat}

More information in first comment.





Re: [VOTE] Release Apache Hadoop 2.4.0

2014-04-01 Thread Travis Thompson
+1 non-binding

Built from git.  Started with a 120-node 2.3.0 cluster with security and
non-HA, and ran a (non-rolling) upgrade to 2.4.0.  Confirmed the fsimage is OK
and HDFS upgraded successfully.  Also successfully ran some Pig jobs and
MapReduce examples.  Haven't found any issues yet but will continue
testing.  Did not test the Timeline Server since I'm using security.

Thanks,
Travis

On 03/31/2014 02:24 AM, Arun C Murthy wrote:
 Folks,
 
 I've created a release candidate (rc0) for hadoop-2.4.0 that I would like to 
 get released.
 
 The RC is available at: http://people.apache.org/~acmurthy/hadoop-2.4.0-rc0
 The RC tag in svn is here: 
 https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.4.0-rc0
 
 The maven artifacts are available via repository.apache.org.
 
 Please try the release and vote; the vote will run for the usual 7 days.
 
 thanks,
 Arun
 
 --
 Arun C. Murthy
 Hortonworks Inc.
 http://hortonworks.com/
 
 
 


[jira] [Created] (HADOOP-10452) BUILDING.txt needs to be updated

2014-03-31 Thread Travis Thompson (JIRA)
Travis Thompson created HADOOP-10452:


 Summary: BUILDING.txt needs to be updated
 Key: HADOOP-10452
 URL: https://issues.apache.org/jira/browse/HADOOP-10452
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.3.0
Reporter: Travis Thompson
Priority: Minor


BUILDING.txt is missing some information about native compression libraries.  
Notably, if you are missing the zlib/bzip2/snappy devel libraries, those will get 
silently skipped unless you pass the {{-Drequire.$LIB}} option (e.g. 
{{-Drequire.snappy}}).





[jira] [Created] (HADOOP-10409) Bzip2 error message isn't clear

2014-03-14 Thread Travis Thompson (JIRA)
Travis Thompson created HADOOP-10409:


 Summary: Bzip2 error message isn't clear
 Key: HADOOP-10409
 URL: https://issues.apache.org/jira/browse/HADOOP-10409
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.3.0
Reporter: Travis Thompson


If you compile Hadoop without {{bzip2-devel}} installed (on RHEL), bzip2 
doesn't get compiled into libhadoop, as expected.  This is not documented, 
however, and the error message thrown from {{hadoop checknative -a}} is not 
helpful.

{noformat}
[tthompso@eat1-hcl4060 bin]$ hadoop checknative -a
14/03/13 00:51:02 WARN bzip2.Bzip2Factory: Failed to load/initialize 
native-bzip2 library system-native, will use pure-Java version
14/03/13 00:51:02 INFO zlib.ZlibFactory: Successfully loaded & initialized 
native-zlib library
Native library checking:
hadoop: true 
/export/apps/hadoop/hadoop-2.3.0.li7-1-bin/lib/native/libhadoop.so.1.0.0
zlib:   true /lib64/libz.so.1
snappy: true /usr/lib64/libsnappy.so.1
lz4:true revision:99
bzip2:  false 
14/03/13 00:51:02 INFO util.ExitUtil: Exiting with status 1
{noformat}

You can see that it wasn't compiled in here:
{noformat}
[mislam@eat1-hcl4060 ~]$ strings 
/export/apps/hadoop/latest/lib/native/libhadoop.so | grep initIDs
Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
{noformat}

After installing bzip2-devel and recompiling:
{noformat}
[tthompso@eat1-hcl4060 ~]$ hadoop checknative -a
14/03/14 23:00:08 INFO bzip2.Bzip2Factory: Successfully loaded & initialized 
native-bzip2 library system-native
14/03/14 23:00:08 INFO zlib.ZlibFactory: Successfully loaded & initialized 
native-zlib library
Native library checking:
hadoop: true 
/export/apps/hadoop/hadoop-2.3.0.11-2-bin/lib/native/libhadoop.so.1.0.0
zlib:   true /lib64/libz.so.1
snappy: true /usr/lib64/libsnappy.so.1
lz4:true revision:99
bzip2:  true /lib64/libbz2.so.1
{noformat}
{noformat}
tthompso@esv4-hcl261:~/hadoop-common(li-2.3.0⚡) » strings 
./hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.so
 |grep initIDs
Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
Java_org_apache_hadoop_io_compress_bzip2_Bzip2Compressor_initIDs
Java_org_apache_hadoop_io_compress_bzip2_Bzip2Decompressor_initIDs
{noformat}

The error message thrown should hint that libhadoop may not have been compiled 
with the bzip2 headers installed.  It would also be nice if the compile-time 
dependencies were documented somewhere... :)





Re: [VOTE] Release Apache Hadoop 2.3.0

2014-02-13 Thread Travis Thompson
I suspect it's your setup because I'm able to run the PI example without 
errors.  Running on RHEL 6.3 w/ JDK7.

On Feb 13, 2014, at 3:05 PM, Alejandro Abdelnur t...@cloudera.com
 wrote:

 Trying to run the PI MapReduce example using RC0, the job is failing;
 looking at the NM logs, I'm getting the following.
 
 I believe it may be something in my setup, as many have already tested MR jobs
 with this RC successfully, but I couldn't figure it out yet. Running on OSX 10.9.1
 using JDK7.
 
 Thanks.
 
 
 --
 
 2014-02-13 13:12:06,092 INFO
 org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
 Initializing user tucu
 
 2014-02-13 13:12:06,184 INFO
 org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying
 from
 /tmp/hadoop-tucu/nm-local-dir/nmPrivate/container_1392325918406_0001_01_01.tokens
 to
 /tmp/hadoop-tucu/nm-local-dir/usercache/tucu/appcache/application_1392325918406_0001/container_1392325918406_0001_01_01.tokens
 
 2014-02-13 13:12:06,184 INFO
 org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: CWD set
 to
 /tmp/hadoop-tucu/nm-local-dir/usercache/tucu/appcache/application_1392325918406_0001
 =
 file:/tmp/hadoop-tucu/nm-local-dir/usercache/tucu/appcache/application_1392325918406_0001
 
 2014-02-13 13:12:06,957 INFO
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
 DEBUG: FAILED {
 hdfs://localhost:9000/tmp/hadoop-yarn/staging/tucu/.staging/job_1392325918406_0001/job.jar,
 1392325925016, PATTERN, (?:classes/|lib/).* }, rename destination
 /tmp/hadoop-tucu/nm-local-dir/usercache/tucu/appcache/application_1392325918406_0001/filecache/10
 already exists.
 
 2014-02-13 13:12:06,959 INFO
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource:
 Resource
 hdfs://localhost:9000/tmp/hadoop-yarn/staging/tucu/.staging/job_1392325918406_0001/job.jar
 transitioned from DOWNLOADING to FAILED
 
 --
 
 
 On Thu, Feb 13, 2014 at 1:06 PM, Sandy Ryza sandy.r...@cloudera.com wrote:
 
 +1 (non-binding)
 
 Built from source and ran jobs on a pseudo-distributed cluster with the
 Fair Scheduler
 
 
 On Wed, Feb 12, 2014 at 7:56 PM, Xuan Gong xg...@hortonworks.com wrote:
 
 +1 (non-binding)
 
 downloaded the source tar ball, built, ran a number of MR jobs on a
 single-node cluster and checked the job history from job history server.
 
 
 On Wed, Feb 12, 2014 at 7:53 PM, Gera Shegalov g...@shegalov.com
 wrote:
 
 +1 non-binding
 
 - checked out the rc tag and built from source
 - deployed a pseudo-distributed cluster with
 yarn.resourcemanager.recovery.enabled=true
 - ran a sleep job with multiple map waves and a long reducer
 -- SIGKILL'd AM at various points and verified AM restart
 -- SIGKILL'd RM at various points and verified RM restart
 - checked some ui issues we had fixed.
 - verified the new restful plain text container log NM-WS
 
 Thanks,
 
 Gera
 
 
 On Tue, Feb 11, 2014 at 6:49 AM, Arun C Murthy a...@hortonworks.com
 wrote:
 
 Folks,
 
 I've created a release candidate (rc0) for hadoop-2.3.0 that I would
 like
 to get released.
 
 The RC is available at:
 http://people.apache.org/~acmurthy/hadoop-2.3.0-rc0
 The RC tag in svn is here:
 
 https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.3.0-rc0
 
 The maven artifacts are available via repository.apache.org.
 
 Please try the release and vote; the vote will run for the usual 7
 days.
 
 thanks,
 Arun
 
 PS: Thanks to Andrew, Vinod & Alejandro for all their help in various
 release activities.
 
 
 
 
 
 
 
 
 -- 
 Alejandro





Re: [VOTE] Release Apache Hadoop 2.3.0

2014-02-11 Thread Travis Thompson
Everything looks good so far, running on 100 nodes with security enabled.

I've found two minor issues with the new NameNode UI so far and will work on 
them over the next few days:

HDFS-5934
HDFS-5935

Thanks,

Travis

On Feb 11, 2014, at 4:53 PM, Mohammad Islam misla...@yahoo.com
 wrote:

 Thanks Arun for the initiative.
 
 +1 non-binding.
 
 
 I tested the following:
 1. Build package from the source tar.
 2. Verified with md5sum
 3. Verified with gpg 
 4. Basic testing
 
 Overall, good to go.
 
 Regards,
 Mohammad
 
 
 
 
 On Tuesday, February 11, 2014 2:07 PM, Chen He airb...@gmail.com wrote:
 
 +1, non-binding
 successfully compiled on MacOS 10.7,
 deployed to Fedora 7, and ran a test job without any problem.
 
 
 
 On Tue, Feb 11, 2014 at 8:49 AM, Arun C Murthy a...@hortonworks.com wrote:
 
 Folks,
 
 I've created a release candidate (rc0) for hadoop-2.3.0 that I would like
 to get released.
 
 The RC is available at:
 http://people.apache.org/~acmurthy/hadoop-2.3.0-rc0
 The RC tag in svn is here:
 https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.3.0-rc0
 
 The maven artifacts are available via repository.apache.org.
 
 Please try the release and vote; the vote will run for the usual 7 days.
 
 thanks,
 Arun
 
 PS: Thanks to Andrew, Vinod & Alejandro for all their help in various
 release activities.


