Re: Thinking ahead to 2.4

2014-03-14 Thread Arun C Murthy
Update: We are now down to just 8 blockers, of which 4 already have patches available (PA).

I hear it's in good shape per our QE gang too. If things go well, I 
plan to create an RC in the latter half of next week.

thanks,
Arun

On Mar 11, 2014, at 7:11 AM, Arun C Murthy  wrote:

> Sorry, the previous link had a bug, the correct one is: 
> http://s.apache.org/hadoop-2.4.0-blockers.
> 
> We are currently down to 12 blockers, with several PA.
> 
> thanks,
> Arun
> 
> On Mar 6, 2014, at 1:40 PM, Arun C Murthy  wrote:
> 
>> Gang,
>> 
>>  Most of the big-ticket items are already in, awesome!
>> 
>>  I'm thinking we could roll out a 2.4 RC in the next 2-3 weeks after we get 
>> through the list of blockers. Here is a handy link: 
>> http://s.apache.org/hadoop-2.4-blockers
>> 
>>  If you find more, please set Target Version to 2.4.0 and mark it a blocker. 
>> I'll try nudging people to start closing these soon, appreciate any help!
>> 
>> thanks,
>> Arun
>> 
>>  
>> On Feb 20, 2014, at 3:45 PM, Arun C Murthy  wrote:
>> 
>>> Thanks Azuryy & Suresh. I've updated the roadmap wiki to reflect this.
>>> 
>>> Arun
>>> 
>>> On Feb 20, 2014, at 2:01 PM, Suresh Srinivas  wrote:
>>> 
 Arun,
 
 Some of the features previously targeted for 2.4 were made available in 2.3:
 - Heterogeneous storage support
 - Datanode cache
 
 The following are being targeted for 2.4:
 - Use protobuf for fsimage (already in)
 - ACLs (in trunk; this will be merged to branch-2.4 in a week or so)
 - Rolling upgrades (last batch of JIRAs being worked on in the feature branch;
 testing is in progress and it should be in 2.4 in around two weeks)
 
 So HDFS features should be ready in two weeks.
 
 
 On Sat, Feb 15, 2014 at 4:47 PM, Azuryy  wrote:
 
> Hi,
> I think you omitted some key pieces in 2.4
> 
> Protobuf fsimage and rolling upgrades are also targeted for 2.4
> 
> 
> 
> Sent from my iPhone5s
> 
>> On Feb 16, 2014, at 6:59, Arun C Murthy  wrote:
>> 
>> Folks,
>> 
>> With hadoop-2.3 nearly done, I think it's time to think ahead to
>> hadoop-2.4. I think it was a good idea to expedite the release of 2.3 while
>> we finished up pieces that didn't make it in, such as HDFS Caching & Support
>> for Heterogeneous Storage.
>> 
>> Now, most of the key pieces incl. Resource Manager Automatic Failover
>> (YARN-149), Application History Server (YARN-321) & Application Timeline
>> Server (YARN-1530) are either complete or very close to done, and I think
>> we will benefit from an extended test-cycle for 2.4 - similar to what
>> happened with 2.2. To provide some context: 2.2 went through nearly 6 weeks
>> of extended testing and it really helped us push out a very stable release.
>> 
>> I think it will be good to create a 2.4 branch ASAP and start testing.
>> As such, I plan to cut the branch early next week. With this, we should be
>> in good shape to release 2.4 sometime in mid-March.
>> 
>> I've updated https://wiki.apache.org/hadoop/Roadmap to reflect this.
>> 
>> Also, we should start thinking ahead to 2.5 and what folks would like to
>> see in it. If we continue our 6-week cycles, we could shoot to get that out
>> in April.
>> 
>> Thoughts?
>> 
>> thanks,
>> Arun
>> 
>> 
>> --
>> Arun C. Murthy
>> Hortonworks Inc.
>> http://hortonworks.com/
>> 
>> 
>> 
>> --
>> CONFIDENTIALITY NOTICE
>> NOTICE: This message is intended for the use of the individual or entity
> to
>> which it is addressed and may contain information that is confidential,
>> privileged and exempt from disclosure under applicable law. If the reader
>> of this message is not the intended recipient, you are hereby notified
> that
>> any printing, copying, dissemination, distribution, disclosure or
>> forwarding of this communication is strictly prohibited. If you have
>> received this communication in error, please contact the sender
> immediately
>> and delete it from your system. Thank You.
> 
 
 
 
 -- 
 http://hortonworks.com/download/
 
>>> 
>>> --
>>> Arun C. Murthy
>>> Hortonworks Inc.
>>> http://hortonworks.com/
>>> 
>>> 
>> 
>> --
>> Arun C. Murthy

[jira] [Created] (HADOOP-10409) Bzip2 error message isn't clear

2014-03-14 Thread Travis Thompson (JIRA)
Travis Thompson created HADOOP-10409:


 Summary: Bzip2 error message isn't clear
 Key: HADOOP-10409
 URL: https://issues.apache.org/jira/browse/HADOOP-10409
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.3.0
Reporter: Travis Thompson


If you compile hadoop without {{bzip2-devel}} installed (on RHEL), bzip2 
doesn't get compiled into libhadoop, as expected.  This is not documented, 
however, and the error message thrown by {{hadoop checknative -a}} is not 
helpful.

{noformat}
[tthompso@eat1-hcl4060 bin]$ hadoop checknative -a
14/03/13 00:51:02 WARN bzip2.Bzip2Factory: Failed to load/initialize 
native-bzip2 library system-native, will use pure-Java version
14/03/13 00:51:02 INFO zlib.ZlibFactory: Successfully loaded & initialized 
native-zlib library
Native library checking:
hadoop: true 
/export/apps/hadoop/hadoop-2.3.0.li7-1-bin/lib/native/libhadoop.so.1.0.0
zlib:   true /lib64/libz.so.1
snappy: true /usr/lib64/libsnappy.so.1
lz4:    true revision:99
bzip2:  false 
14/03/13 00:51:02 INFO util.ExitUtil: Exiting with status 1
{noformat}

You can see that it wasn't compiled in here:
{noformat}
[mislam@eat1-hcl4060 ~]$ strings 
/export/apps/hadoop/latest/lib/native/libhadoop.so | grep initIDs
Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
{noformat}
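The same check can be scripted. As a rough sketch (not part of Hadoop; the JNI symbol-name pattern is taken from the {{strings}} output above), a few lines of Python can report which codecs a given libhadoop.so was built with:

```python
import subprocess

# Codecs whose JNI *_initIDs symbols appear in libhadoop.so when compiled in
CODECS = ("lz4", "snappy", "zlib", "bzip2")

def compiled_codecs(strings_output):
    """Return the set of codecs whose *_initIDs JNI symbols are present
    in the output of `strings libhadoop.so`."""
    found = set()
    for line in strings_output.splitlines():
        for codec in CODECS:
            if ("io_compress_%s_" % codec) in line and line.endswith("_initIDs"):
                found.add(codec)
    return found

def check_library(path):
    """Run `strings` on a shared library and report its compiled-in codecs."""
    out = subprocess.run(["strings", path], capture_output=True, text=True).stdout
    return compiled_codecs(out)
```

On a libhadoop.so built without bzip2-devel, `check_library(...)` would return a set missing "bzip2", matching the {{checknative}} result.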

After installing bzip2-devel and recompiling:
{noformat}
[tthompso@eat1-hcl4060 ~]$ hadoop checknative -a
14/03/14 23:00:08 INFO bzip2.Bzip2Factory: Successfully loaded & initialized 
native-bzip2 library system-native
14/03/14 23:00:08 INFO zlib.ZlibFactory: Successfully loaded & initialized 
native-zlib library
Native library checking:
hadoop: true 
/export/apps/hadoop/hadoop-2.3.0.11-2-bin/lib/native/libhadoop.so.1.0.0
zlib:   true /lib64/libz.so.1
snappy: true /usr/lib64/libsnappy.so.1
lz4:    true revision:99
bzip2:  true /lib64/libbz2.so.1
{noformat}
{noformat}
tthompso@esv4-hcl261:~/hadoop-common(li-2.3.0⚡) » strings 
./hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.so
 |grep initIDs
Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
Java_org_apache_hadoop_io_compress_bzip2_Bzip2Compressor_initIDs
Java_org_apache_hadoop_io_compress_bzip2_Bzip2Decompressor_initIDs
{noformat}

The error message thrown should hint that libhadoop probably wasn't compiled 
with the bzip2 headers installed.  It would also be nice if compile-time 
dependencies were documented somewhere... :)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[OT] Would anyone like to speak about Hadoop

2014-03-14 Thread Pankaj Kumar Sharma
Hey,

We (JMI Linux User Group) are organizing an event called BootConf '14 in New
Delhi, India.

One of the suggested topics is "Apache Hadoop".
The audience would be basically newbies and students.

If anyone from this list happens to be around Delhi (India) and wishes to
give a session about Hadoop and its community, please send me a private
mail.

You can learn more about the event at http://bootconf.in/2014/


Thanks,

-- 
Pankaj Kumar Sharma | पंकज कुमार शर्मा
 Jamia Millia Islamia


Build failed in Jenkins: Hadoop-Common-trunk #1069

2014-03-14 Thread Apache Jenkins Server
See 

Changes:

[wang] HDFS-6102. Lower the default maximum items per directory to fix PB 
fsimage loading. Contributed by Andrew Wang.

[vinodkv] YARN-1658. Modified web-app framework to let standby RMs redirect 
web-service calls to the active RM. Contributed by Cindy Li.

[wheat9] HDFS-6084. Namenode UI - Hadoop logo link shouldn't go to hadoop 
homepage. Contributed by Travis Thompson.

[cdouglas] HADOOP-3679. Fixup assert ordering in unit tests to yield meaningful 
error
messages. Contributed by Jay Vyas

[cdouglas] YARN-1771. Reduce the number of NameNode operations during 
localization of
public resources using a cache. Contributed by Sangjin Lee

[cmccabe] HDFS-6097. Zero-copy reads are incorrectly disabled on file offsets 
above 2GB (cmccabe)

[brandonli] HDFS-6080. Improve NFS gateway performance by making rtmax and 
wtmax configurable. Contributed by Abin Shahab

[jeagles] MAPREDUCE-5456. TestFetcher.testCopyFromHostExtraBytes is missing 
(Jason Lowe via jeagles)

[arp] MAPREDUCE-5794. SliveMapper always uses default FileSystem. (Contributed 
by szetszwo)

[suresh] HDFS-5244. TestNNStorageRetentionManager#testPurgeMultipleDirs fails. 
Contributed by Jinghui Wang.

[jeagles] MAPREDUCE-5713. InputFormat and JobConf JavaDoc Fixes (Chen He via 
jeagles)

[jeagles] HADOOP-10332. HttpServer's jetty audit log always logs 200 OK 
(jeagles)

[jlowe] MAPREDUCE-5789. Average Reduce time is incorrect on Job Overview page. 
Contributed by Rushabh S Shah

[jeagles] MAPREDUCE-5765. Update hadoop-pipes examples README (Mit Desai via 
jeagles)

--
[...truncated 66556 lines...]
Adding reference: maven.local.repository
[DEBUG] Initialize Maven Ant Tasks
parsing buildfile 
jar:file:/home/jenkins/.m2/repository/org/apache/maven/plugins/maven-antrun-plugin/1.7/maven-antrun-plugin-1.7.jar!/org/apache/maven/ant/tasks/antlib.xml
 with URI = 
jar:file:/home/jenkins/.m2/repository/org/apache/maven/plugins/maven-antrun-plugin/1.7/maven-antrun-plugin-1.7.jar!/org/apache/maven/ant/tasks/antlib.xml
 from a zip file
parsing buildfile 
jar:file:/home/jenkins/.m2/repository/org/apache/ant/ant/1.8.2/ant-1.8.2.jar!/org/apache/tools/ant/antlib.xml
 with URI = 
jar:file:/home/jenkins/.m2/repository/org/apache/ant/ant/1.8.2/ant-1.8.2.jar!/org/apache/tools/ant/antlib.xml
 from a zip file
Class org.apache.maven.ant.tasks.AttachArtifactTask loaded from parent loader 
(parentFirst)
 +Datatype attachartifact org.apache.maven.ant.tasks.AttachArtifactTask
Class org.apache.maven.ant.tasks.DependencyFilesetsTask loaded from parent 
loader (parentFirst)
 +Datatype dependencyfilesets org.apache.maven.ant.tasks.DependencyFilesetsTask
Setting project property: test.build.dir -> 

Setting project property: test.exclude.pattern -> _
Setting project property: hadoop.assemblies.version -> 3.0.0-SNAPSHOT
Setting project property: test.exclude -> _
Setting project property: distMgmtSnapshotsId -> apache.snapshots.https
Setting project property: project.build.sourceEncoding -> UTF-8
Setting project property: java.security.egd -> file:///dev/urandom
Setting project property: distMgmtSnapshotsUrl -> 
https://repository.apache.org/content/repositories/snapshots
Setting project property: distMgmtStagingUrl -> 
https://repository.apache.org/service/local/staging/deploy/maven2
Setting project property: avro.version -> 1.7.4
Setting project property: test.build.data -> 

Setting project property: commons-daemon.version -> 1.0.13
Setting project property: hadoop.common.build.dir -> 

Setting project property: testsThreadCount -> 4
Setting project property: maven.test.redirectTestOutputToFile -> true
Setting project property: jdiff.version -> 1.0.9
Setting project property: build.platform -> Linux-i386-32
Setting project property: project.reporting.outputEncoding -> UTF-8
Setting project property: distMgmtStagingName -> Apache Release Distribution 
Repository
Setting project property: protobuf.version -> 2.5.0
Setting project property: failIfNoTests -> false
Setting project property: protoc.path -> ${env.HADOOP_PROTOC_PATH}
Setting project property: jersey.version -> 1.9
Setting project property: distMgmtStagingId -> apache.staging.https
Setting project property: distMgmtSnapshotsName -> Apache Development Snapshot 
Repository
Setting project property: ant.file -> 

[DEBUG] Setting properties with prefix: 
Setting project property: project.groupId -> org.apache.hadoop
Setting project property: project.artifactId -> hadoop-common-project
Setting proje