FW: Early Access builds for JDK 9 b53 and JDK 8u60 b05 are available on java.net

2015-03-10 Thread Steve Loughran

Looking ahead to Java 9, here's where the builds are up for D/L

From: Rory O'Donnell

Subject: Early Access builds for JDK 9 b53 and JDK 8u60 b05 are available on 
java.net


Hi Andrew,

Early Access build for JDK 9 b53 is available on java.net:
https://jdk9.java.net/download/
A summary of changes is listed here:
http://www.java.net/download/jdk9/changes/jdk9-b53.html

Early Access build for JDK 8u60 b05 is available on java.net:
http://jdk8.java.net/download.html
A summary of changes is listed here:
http://www.java.net/download/jdk8u60/changes/jdk8u60-b05.html

I'd also like to use this opportunity to point you to JEP 238: Multi-Version 
JAR Files [0],
which is currently a Candidate JEP for JDK 9.

Its goal is to extend the JAR file format to allow multiple, JDK 
release-specific versions of class
files to coexist in a single file. An additional goal is to backport the 
run-time changes to
JDK 8u60, thereby enabling JDK 8 to consume multi-version JARs. For a detailed 
discussion,
please see the corresponding thread on the core-libs-dev mailing list. [1]
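
For a rough picture of what that means, a multi-version JAR keeps its default
classes at the root and puts release-specific variants under META-INF/versions,
flagged in the manifest. A sketch only; entry and attribute names follow the JEP
draft and should be treated as illustrative:

  multirelease.jar
    META-INF/MANIFEST.MF          (contains: Multi-Release: true)
    A.class                       (default version, used by a JDK 8 runtime)
    B.class
    META-INF/versions/9/A.class   (JDK 9-specific variant, preferred by a JDK 9 runtime)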

Please keep in mind that a JEP in the Candidate state is merely an idea worthy 
of consideration
by JDK Release Projects and related efforts; there is no commitment that it 
will be delivered in
any particular release.

Comments, questions, and suggestions are welcome on the core-libs-dev mailing 
list. (If you
haven't already subscribed to that list then please do so first, otherwise your 
message will be
discarded as spam.)

Rgds, Rory

[0] http://openjdk.java.net/jeps/238
[1] 
http://mail.openjdk.java.net/pipermail/core-libs-dev/2015-February/031461.html



--
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland


[jira] [Created] (HADOOP-11702) Allow StringSignerSecretProvider to read secret from a file

2015-03-10 Thread Arun Suresh (JIRA)
Arun Suresh created HADOOP-11702:


 Summary: Allow StringSignerSecretProvider to read secret from a 
file
 Key: HADOOP-11702
 URL: https://issues.apache.org/jira/browse/HADOOP-11702
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Arun Suresh
Assignee: Arun Suresh
Priority: Minor


Currently, the {{StringSignerSecretProvider}} has to be provided the secret in 
a configuration file. This patch allows the secret string to be read from a 
separate file as well.
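
As a rough before/after sketch of the configuration involved (property names and
paths here are illustrative assumptions, not necessarily what the patch uses):

  <!-- today: the secret is placed inline in the configuration -->
  <property>
    <name>hadoop.http.authentication.signature.secret</name>
    <value>my-inline-secret</value>
  </property>

  <!-- with this change: point at a file holding the secret instead -->
  <property>
    <name>hadoop.http.authentication.signature.secret.file</name>
    <value>/etc/hadoop/conf/http-signature-secret</value>
  </property>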



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11641) root directory quota can be set but can not be clear

2015-03-10 Thread kanaka kumar avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kanaka kumar avvaru resolved HADOOP-11641.
--
Resolution: Duplicate

 root directory quota can be set but can not be clear
 

 Key: HADOOP-11641
 URL: https://issues.apache.org/jira/browse/HADOOP-11641
 Project: Hadoop Common
  Issue Type: Bug
Reporter: prophy Yan

 The name quota of the HDFS root directory '/' can be set to 10 or less, but 
 it cannot be cleared afterwards, so users cannot put or create any files or 
 directories. How can the '/' name quota be set back to the maximum?
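
For context, the operations involved are the standard dfsadmin quota commands,
along these lines:

  hdfs dfsadmin -setQuota 10 /   # setting a name quota on the root directory works
  hdfs dfsadmin -clrQuota /      # clearing it again is what this report says fails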



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


CfP 10th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '15)

2015-03-10 Thread VHPC 15
=
CALL FOR PAPERS

10th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '15)

held in conjunction with Euro-Par 2015, August 24-28, Vienna, Austria

(Springer LNCS)
=

Date: August 25, 2015
Workshop URL: http://vhpc.org

Paper Submission Deadline: May 22, 2015


CALL FOR PAPERS

Virtualization technologies constitute a key enabling factor for
flexible resource
management in modern data centers, cloud environments, and increasingly in
HPC as well. Providers need to dynamically manage complex infrastructures in a
seamless fashion for varying workloads and hosted applications, independently of
the customers deploying software or users submitting highly dynamic and
heterogeneous workloads. Thanks to virtualization, we have the ability to manage
vast computing and networking resources dynamically and close to the marginal
cost of providing the services, which is unprecedented in the history
of scientific
and commercial computing.

Various virtualization technologies contribute to the overall picture
in different
ways: machine virtualization, with its capability to enable
consolidation of multiple
under-utilized servers with heterogeneous software and operating systems (OSes),
and its capability to live-migrate a fully operating virtual machine
(VM) with a very
short downtime, enables novel and dynamic ways to manage physical servers;
OS-level virtualization, with its capability to isolate multiple user-space
environments and to allow for their co-existence within the same OS kernel,
promises to provide many of the advantages of machine virtualization with high
levels of responsiveness and performance; I/O Virtualization allows physical
network adapters to take traffic from multiple VMs; network
virtualization, with its
capability to create logical network overlays that are independent of the
underlying physical topology and IP addressing, provides the fundamental
ground on top of which evolved network services can be realized with an
unprecedented level of dynamicity and flexibility. These technologies
have to be inter-mixed and integrated in an intelligent way, to support
workloads that are increasingly demanding in terms of absolute performance,
responsiveness and interactivity, and have to respect well-specified Service-
Level Agreements (SLAs), as needed for industrial-grade provided services.
Indeed, among emerging and increasingly interesting application domains
for virtualization, we can find big-data application workloads in cloud
infrastructures, interactive and real-time multimedia services in the cloud,
including real-time big-data streaming platforms such as those used in real-time
analytics, which nowadays support a plethora of application domains. Distributed
cloud infrastructures promise to offer unprecedented responsiveness levels for
hosted applications, but that is only possible if the underlying virtualization
technologies can overcome most of the latency impairments typical of current
virtualized infrastructures (e.g., far worse tail-latency).

The Workshop on Virtualization in High-Performance Cloud Computing (VHPC)
aims to bring together researchers and industrial practitioners facing
the challenges
posed by virtualization in order to foster discussion, collaboration,
mutual exchange
of knowledge and experience, enabling research to ultimately provide novel
solutions for virtualized computing systems of tomorrow.

The workshop will be one day in length, composed of 20 min paper presentations,
each followed by 10 min discussion sections, and lightning talks, limited to 5
minutes. Presentations may be accompanied by interactive demonstrations.

TOPICS

Topics of interest include, but are not limited to:

- Virtualization in supercomputing environments, HPC clusters, cloud
HPC and grids
- Optimizations of virtual machine monitor platforms, hypervisors and
OS-level virtualization
- Hypervisor and network virtualization QoS and SLAs
- Cloud based network and system management for SDN and NFV
- Management, deployment and monitoring of virtualized environments
- Performance measurement, modelling and monitoring of
virtualized/cloud workloads
- Programming models for virtualized environments
- Cloud reliability, fault-tolerance, high-availability and security
- Heterogeneous virtualized environments, virtualized accelerators,
GPUs and co-processors
- Optimized communication libraries/protocols in the cloud and for HPC
in the cloud
- Topology management and optimization for distributed virtualized applications
- Cluster provisioning in the cloud and cloud bursting
- Adaptation of emerging HPC technologies (high performance networks,
RDMA, etc.)
- I/O and storage virtualization, virtualization aware file systems
- Job scheduling/control/policy in virtualized environments
- Checkpointing and migration of VM-based large compute jobs
- Cloud frameworks and APIs
- 

Re: Hadoop 3.x: what about shipping trunk as a 2.x release in 2015?

2015-03-10 Thread Allen Wittenauer

On Mar 10, 2015, at 12:40 PM, Karthik Kambatla ka...@cloudera.com wrote:

 
 Are we okay with breaking other forms of compatibility for Hadoop-3, like
 behavior, dependencies, JDK, classpath, environment? I think so. Are we
 okay with breaking these forms of compatibility in future Hadoop-2.x?
 Likely not. Does our compatibility policy allow these changes in 2.x?
 Mostly yes, but that is because we don't have policies for a lot of these
 things that affect end-users.

I’d disagree with that last statement.  The compatibility guarantees in 
Compatibility.md cover all of these examples. 

Changing the JDK:
* Build Artifacts
* Hardware/Software Requirements
* Hadoop ABI

API compatibility:
* Java API
* Build artifacts
* Hadoop ABI

Wire compatibility violations:
* Wire compatibility
* Hadoop ABI

Environment:
* Depends upon what is meant by that, but it’s pretty much all of the 
above, plus CLI, env var, etc.


All of these are very clear that this stuff should change in a major 
version only, in order not to disrupt our users.  The only one we can change is 
dependencies, covered under class path:

"Currently, there is NO policy on when Hadoop's dependencies can change."

But it is heavily implied that this is a bad thing to do:

"Adding new dependencies or updating the version of existing dependencies may 
interfere with those in applications' class paths."





[jira] [Created] (HADOOP-11701) RPC authentication fallback option should support enabling fallback only for specific connections.

2015-03-10 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-11701:
--

 Summary: RPC authentication fallback option should support 
enabling fallback only for specific connections.
 Key: HADOOP-11701
 URL: https://issues.apache.org/jira/browse/HADOOP-11701
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc, security
Reporter: Chris Nauroth


We currently support the {{ipc.client.fallback-to-simple-auth-allowed}} 
configuration property so that a client configured with security can fall back 
to simple authentication when communicating with an unsecured server.  This is 
a global property that enables the fallback behavior for all RPC connections, 
even though fallback is only desirable for clusters that are known to be 
unsecured.  This issue proposes to support configurability of fallback on 
specific connections, not all connections globally.
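
For reference, the existing global switch is a client-side setting along these
lines; the per-connection form proposed here is not yet defined:

  <!-- core-site.xml on the client: current all-or-nothing fallback switch -->
  <property>
    <name>ipc.client.fallback-to-simple-auth-allowed</name>
    <value>true</value>
  </property>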



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


upstream jenkins build broken?

2015-03-10 Thread Colin P. McCabe
Hi all,

A very quick (and not thorough) survey shows that I can't find any
jenkins jobs that succeeded from the last 24 hours.  Most of them seem
to be failing with some variant of this message:

[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-clean-plugin:2.5:clean (default-clean)
on project hadoop-hdfs: Failed to clean project: Failed to delete
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3
- [Help 1]

Any ideas how this happened?  Bad disk, unit test setting wrong permissions?

Colin


Re: upstream jenkins build broken?

2015-03-10 Thread Aaron T. Myers
Hey Colin,

I asked Andrew Bayer, who works with Apache Infra, what's going on with
these boxes. He took a look and concluded that some perms are being set in
those directories by our unit tests which are precluding those files from
getting deleted. He's going to clean up the boxes for us, but we should
expect this to keep happening until we can fix the test in question to
properly clean up after itself.

To help narrow down which commit it was that started this, Andrew sent me
this info:

/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3/
has 500 perms, so I'm guessing that's the problem. Been that way since
9:32 UTC on March 5th.
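
A possible way to spot and repair the affected directories on the slaves,
assuming the 500-perm directories under that workspace are the only culprits
(sketch only):

  # Find leftover test-data dirs that are owner r-x (mode 500) and give the
  # owner write permission back so 'mvn clean' can delete their contents.
  find /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build \
       -type d -perm 0500 -print -exec chmod u+w {} +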

--
Aaron T. Myers
Software Engineer, Cloudera

On Tue, Mar 10, 2015 at 1:24 PM, Colin P. McCabe cmcc...@apache.org wrote:

 Hi all,

 A very quick (and not thorough) survey shows that I can't find any
 jenkins jobs that succeeded from the last 24 hours.  Most of them seem
 to be failing with some variant of this message:

 [ERROR] Failed to execute goal
 org.apache.maven.plugins:maven-clean-plugin:2.5:clean (default-clean)
 on project hadoop-hdfs: Failed to clean project: Failed to delete

 /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data3
 - [Help 1]

 Any ideas how this happened?  Bad disk, unit test setting wrong
 permissions?

 Colin



Re: Hadoop 3.x: what about shipping trunk as a 2.x release in 2015?

2015-03-10 Thread Colin P. McCabe
Er, that should read "as Allen commented."  C.

On Tue, Mar 10, 2015 at 11:55 AM, Colin P. McCabe cmcc...@apache.org wrote:
 Hi Arun,

 Not all changes which are incompatible can be fixed-- sometimes an
 incompatibility is a necessary part of a change.  For example, taking
 a really old library dependency with known security issues off the
 CLASSPATH will create incompatibilities, but it's also necessary.  A
 minimum JDK version bump also falls in that category.  There are also
 cases where we need to drop support for really obsolete and baroque
 features from the past.  For example, it would be nice if we could
 finally get rid of the code to read pre-transactional edit logs.  It's
 a substantial amount of code.  We could argue that we should just
 support legacy stuff forever, but code quality will suffer.

 These changes need to be made sooner or later, and a major version
 bump is an ideal place to make them.  I think that making these
 changes in a 2.x release is hostile to operators, as Alan commented.
 That's what we're trying to avoid by discussing Hadoop 3.x.

 Colin

 On Mon, Mar 9, 2015 at 3:54 PM, Arun Murthy a...@hortonworks.com wrote:
 Colin,

  Do you have a list of incompatible changes other than the shell-script 
 rewrite? If we do have others we'd have to fix them anyway for the current 
 plan on hadoop-3.x right? So, I don't see the difference?

 Arun

 
 From: Colin P. McCabe cmcc...@apache.org
 Sent: Monday, March 09, 2015 3:05 PM
 To: hdfs-...@hadoop.apache.org
 Cc: mapreduce-...@hadoop.apache.org; common-dev@hadoop.apache.org; 
 yarn-...@hadoop.apache.org
 Subject: Re: Hadoop 3.x: what about shipping trunk as a 2.x release in 2015?

 Java 7 will be end-of-lifed in April 2015.  I think it would be unwise
 to plan a new Hadoop release against a version of Java that is almost
 obsolete and (soon) no longer receiving security updates.  I think
 people will be willing to roll out a new version of Java for Hadoop
 3.x.

 Similarly, the whole point of bumping the major version number is the
 ability to make incompatible changes.  There are already a bunch of
 incompatible changes in the trunk branch.  Are you proposing to revert
 those?  Or push them into newly created feature branches?  This
 doesn't seem like a good idea to me.

 I would be in favor of backporting targeted incompatible changes from 
 trunk to branch-2.  For example, we could consider pulling in Allen's
 shell script rewrite.  But pulling in all of trunk seems like a bad
 idea at this point, if we want a 2.x release.

 best,
 Colin

 On Mon, Mar 9, 2015 at 2:15 PM, Steve Loughran ste...@hortonworks.com 
 wrote:

 If 3.x is going to be Java 8 & not backwards compatible, I don't expect 
 anyone wanting to use this in production until some time deep into 2016.

 Issue: JDK 8 vs 7

 It will require Hadoop clusters to move up to Java 8. While there's dev 
 pull for this, there's ops pull against this: people are still in the 
 moving-off Java 6 phase due to the "it's working, don't update it" 
 philosophy. Java 8 is compelling to us coders, but that doesn't mean ops 
 want it.

 You can run JDK-8 code in a YARN cluster running on Hadoop 2.7 *today*, the 
 main thing is setting up JAVA_HOME. That's something we could make easier 
 somehow (maybe some min Java version field in resource requests that will 
 let apps say java 8, java 9, ...). YARN could not only set up JVM paths, it 
 could fail-fast if a Java version wasn't available.
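
 For illustration, one way to do that per job today is via the task/AM
 environment properties; the jar name, driver class, and JDK path below are
 made up, and the -D options assume the driver goes through ToolRunner:

   # Sketch: point one MapReduce job's AM and tasks at a JDK 8 install on the nodes
   hadoop jar my-job.jar MyDriver \
     -D yarn.app.mapreduce.am.env="JAVA_HOME=/usr/lib/jvm/jdk1.8.0" \
     -D mapreduce.map.env="JAVA_HOME=/usr/lib/jvm/jdk1.8.0" \
     -D mapreduce.reduce.env="JAVA_HOME=/usr/lib/jvm/jdk1.8.0" \
     <job args>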

 What we can't do in hadoop core today is set javac.version=1.8 & use java 8 
 code. Downstream code can do that (Hive, etc); they just need to accept that 
 they don't get to play on JDK7 clusters if they embrace lambda expressions.

 So...we need to stay on java 7 for some time due to ops pull; downstream 
 apps get to choose what they want. We can/could enhance YARN to make JVM 
 choice more declarative.

 Issue: Incompatible changes

 Without knowing what is proposed for an incompatible classpath change, I 
 can't say whether this is something that could be made optional. If it 
 isn't, then it is a python-3-class option, a "rewrite your code" event, which 
 is going to be particularly traumatic to things like Hive that already do 
 complex CP games. I'm currently against any mandatory change here, though 
 would love to see an optional one. And if optional, it ceases to become an 
 incompatible change...

 Issue: Getting trunk out the door

 The main diff from branch-2 and trunk is currently the bash script changes. 
 These don't break client apps. May or may not break bigtop & other 
 downstream hadoop stacks, but developers don't need to worry about this:  
 no recompilation necessary

 Proposed: ship trunk as a 2.x release, compatible with JDK7 & Java code.

 It seems to me that I could go

 git checkout trunk
 mvn versions:set -DnewVersion=2.8.0-SNAPSHOT

 We'd then have a version of Hadoop-trunk we could ship later this year, 
 compatible at the JDK and API 

Re: Hadoop 3.x: what about shipping trunk as a 2.x release in 2015?

2015-03-10 Thread Karthik Kambatla
On Mon, Mar 9, 2015 at 2:15 PM, Steve Loughran ste...@hortonworks.com
wrote:


 If 3.x is going to be Java 8 & not backwards compatible, I don't expect
 anyone wanting to use this in production until some time deep into 2016.

 Issue: JDK 8 vs 7

 It will require Hadoop clusters to move up to Java 8. While there's dev
 pull for this, there's ops pull against this: people are still in the
 moving-off Java 6 phase due to the "it's working, don't update it"
 philosophy. Java 8 is compelling to us coders, but that doesn't mean ops
 want it.

 You can run JDK-8 code in a YARN cluster running on Hadoop 2.7 *today*,
 the main thing is setting up JAVA_HOME. That's something we could make
 easier somehow (maybe some min Java version field in resource requests that
 will let apps say java 8, java 9, ...). YARN could not only set up JVM
 paths, it could fail-fast if a Java version wasn't available.

 What we can't do in hadoop core today is set javac.version=1.8 & use java
 8 code. Downstream code can do that (Hive, etc); they just need to accept
 that they don't get to play on JDK7 clusters if they embrace lambda expressions.

 So...we need to stay on java 7 for some time due to ops pull; downstream
 apps get to choose what they want. We can/could enhance YARN to make JVM
 choice more declarative.

 Issue: Incompatible changes

 Without knowing what is proposed for an incompatible classpath change, I
 can't say whether this is something that could be made optional. If it
 isn't, then it is a python-3-class option, a "rewrite your code" event, which
 is going to be particularly traumatic to things like Hive that already do
 complex CP games. I'm currently against any mandatory change here, though
 would love to see an optional one. And if optional, it ceases to become an
 incompatible change...


We should probably start qualifying the word "incompatible" more often.

Are we okay with an API incompatible Hadoop-3? No.

Are we okay with a wire-incompatible Hadoop-3? Likely not.

Are we okay with breaking other forms of compatibility for Hadoop-3, like
behavior, dependencies, JDK, classpath, environment? I think so. Are we
okay with breaking these forms of compatibility in future Hadoop-2.x?
Likely not. Does our compatibility policy allow these changes in 2.x?
Mostly yes, but that is because we don't have policies for a lot of these
things that affect end-users. The reason we don't have a policy, IMO, is a
combination of (1) we haven't spent enough time thinking about them, (2)
without things like classpath isolation, we end up tying developers' hands
if we don't let them change the dependencies. I propose we update our
compat guidelines to be stricter, and do whatever is required to get there.
Is it okay to change our compat guidelines incompatibly? Maybe it
warrants a Hadoop-3? I don't know yet.

And some other changes, like bumping the min JDK requirement, are allowed in
minor releases. Users might be okay with certain JDK bumps (6 to 7, since
no one seems to be using 6 anymore), but users most definitely care about
some other bumps (7 to 8). If we want to remove this subjective evaluation,
I am open to requiring a major version for JDK upgrades (not support, but
language features) even if it meant we have to wait until 3.0 for JDK
upgrade.




 Issue: Getting trunk out the door

 The main diff from branch-2 and trunk is currently the bash script
 changes. These don't break client apps. May or may not break bigtop & other
 downstream hadoop stacks, but developers don't need to worry about this:
 no recompilation necessary

 Proposed: ship trunk as a 2.x release, compatible with JDK7 & Java code.

 It seems to me that I could go

 git checkout trunk
 mvn versions:set -DnewVersion=2.8.0-SNAPSHOT

 We'd then have a version of Hadoop-trunk we could ship later this year,
 compatible at the JDK and API level with the existing java code & JDK7+
 clusters.

 A classpath fix that is optional/compatible can then go out on the 2.x
 line, saving the 3.x tag for something that really breaks things, forces
 all downstream apps to set up new hadoop profiles, have separate modules &
 generally hate the hadoop dev team

 This lets us tick off the recent trunk release and fixed shell scripts
 items, pushing out those benefits to people sooner rather than later, and
 puts off the "Hello, we've just broken your code" event for another 12+
 months.

 Comments?

 -Steve






-- 
Karthik Kambatla
Software Engineer, Cloudera Inc.

http://five.sentenc.es


Re: Hadoop 3.x: what about shipping trunk as a 2.x release in 2015?

2015-03-10 Thread Colin P. McCabe
Hi Arun,

Not all changes which are incompatible can be fixed-- sometimes an
incompatibility is a necessary part of a change.  For example, taking
a really old library dependency with known security issues off the
CLASSPATH will create incompatibilities, but it's also necessary.  A
minimum JDK version bump also falls in that category.  There are also
cases where we need to drop support for really obsolete and baroque
features from the past.  For example, it would be nice if we could
finally get rid of the code to read pre-transactional edit logs.  It's
a substantial amount of code.  We could argue that we should just
support legacy stuff forever, but code quality will suffer.

These changes need to be made sooner or later, and a major version
bump is an ideal place to make them.  I think that making these
changes in a 2.x release is hostile to operators, as Alan commented.
That's what we're trying to avoid by discussing Hadoop 3.x.

Colin

On Mon, Mar 9, 2015 at 3:54 PM, Arun Murthy a...@hortonworks.com wrote:
 Colin,

  Do you have a list of incompatible changes other than the shell-script 
 rewrite? If we do have others we'd have to fix them anyway for the current 
 plan on hadoop-3.x right? So, I don't see the difference?

 Arun

 
 From: Colin P. McCabe cmcc...@apache.org
 Sent: Monday, March 09, 2015 3:05 PM
 To: hdfs-...@hadoop.apache.org
 Cc: mapreduce-...@hadoop.apache.org; common-dev@hadoop.apache.org; 
 yarn-...@hadoop.apache.org
 Subject: Re: Hadoop 3.x: what about shipping trunk as a 2.x release in 2015?

 Java 7 will be end-of-lifed in April 2015.  I think it would be unwise
 to plan a new Hadoop release against a version of Java that is almost
 obsolete and (soon) no longer receiving security updates.  I think
 people will be willing to roll out a new version of Java for Hadoop
 3.x.

 Similarly, the whole point of bumping the major version number is the
 ability to make incompatible changes.  There are already a bunch of
 incompatible changes in the trunk branch.  Are you proposing to revert
 those?  Or push them into newly created feature branches?  This
 doesn't seem like a good idea to me.

 I would be in favor of backporting targeted incompatible changes from
 trunk to branch-2.  For example, we could consider pulling in Allen's
 shell script rewrite.  But pulling in all of trunk seems like a bad
 idea at this point, if we want a 2.x release.

 best,
 Colin

 On Mon, Mar 9, 2015 at 2:15 PM, Steve Loughran ste...@hortonworks.com wrote:

 If 3.x is going to be Java 8 & not backwards compatible, I don't expect 
 anyone wanting to use this in production until some time deep into 2016.

 Issue: JDK 8 vs 7

 It will require Hadoop clusters to move up to Java 8. While there's dev pull 
 for this, there's ops pull against this: people are still in the moving-off 
 Java 6 phase due to the "it's working, don't update it" philosophy. Java 8 
 is compelling to us coders, but that doesn't mean ops want it.

 You can run JDK-8 code in a YARN cluster running on Hadoop 2.7 *today*, the 
 main thing is setting up JAVA_HOME. That's something we could make easier 
 somehow (maybe some min Java version field in resource requests that will 
 let apps say java 8, java 9, ...). YARN could not only set up JVM paths, it 
 could fail-fast if a Java version wasn't available.

 What we can't do in hadoop core today is set javac.version=1.8 & use java 8 
 code. Downstream code can do that (Hive, etc); they just need to accept that 
 they don't get to play on JDK7 clusters if they embrace lambda expressions.

 So...we need to stay on java 7 for some time due to ops pull; downstream 
 apps get to choose what they want. We can/could enhance YARN to make JVM 
 choice more declarative.

 Issue: Incompatible changes

 Without knowing what is proposed for an incompatible classpath change, I 
 can't say whether this is something that could be made optional. If it 
 isn't, then it is a python-3-class option, a "rewrite your code" event, which 
 is going to be particularly traumatic to things like Hive that already do 
 complex CP games. I'm currently against any mandatory change here, though 
 would love to see an optional one. And if optional, it ceases to become an 
 incompatible change...

 Issue: Getting trunk out the door

 The main diff from branch-2 and trunk is currently the bash script changes. 
 These don't break client apps. May or may not break bigtop  other 
 downstream hadoop stacks, but developers don't need to worry about this:  no 
 recompilation necessary

 Proposed: ship trunk as a 2.x release, compatible with JDK7 & Java code.

 It seems to me that I could go

 git checkout trunk
 mvn versions:set -DnewVersion=2.8.0-SNAPSHOT

 We'd then have a version of Hadoop-trunk we could ship later this year, 
 compatible at the JDK and API level with the existing java code & JDK7+ 
 clusters.

 A classpath fix that is optional/compatible can then go out on the 2.x line, 
 saving