> On 19 May 2015, at 17:59, Colin P. McCabe <cmcc...@apache.org> wrote:
> 
> I agree that the protobuf 2.4.1 -> 2.5.0 transition could have been
> handled a lot better by Google.  Specifically, since it was an
> API-breaking upgrade, it should have been a major version bump for the
> Java library version.  I also feel that removing the download links
> for the old versions of the native libraries was careless, and
> certainly burned some of our Hadoop users.
> 
> However, I don't see any reason to believe that protobuf 2.6 will not
> be wire-compatible with earlier versions.  Google has actually been
> pretty good about preserving wire-compatibility... just not about API
> compatibility.  If we want to get a formal statement from the project,
> we can, but I would be pretty shocked if they decided to change the
> protocol in a backwards-incompatible way in a minor version release.

That's one thing they have done well: wire formats don't break (though you have 
the freedom to break them yourself by adding new non-optional fields, which 
older senders won't supply).
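
The skip-unknown-fields rule is what makes that wire compatibility work. A 
minimal sketch of the varint wire format below illustrates it; the field 
numbers and values are made up for illustration, and only wire type 0 
(varints) is handled:

```python
def encode_varint(n):
    """Encode a non-negative int as a protobuf varint (7 bits per byte,
    high bit set on all but the last byte)."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def encode_varint_field(field_number, value):
    """Wire type 0 (varint): tag = (field_number << 3) | 0."""
    return encode_varint(field_number << 3) + encode_varint(value)

def read_varint(data, i):
    shift = result = 0
    while True:
        b = data[i]
        i += 1
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, i
        shift += 7

def decode(data, known_fields):
    """Decode varint fields, silently skipping field numbers we don't
    know -- this skip-unknown rule is what keeps old readers working
    when a newer writer adds fields."""
    result, i = {}, 0
    while i < len(data):
        tag, i = read_varint(data, i)
        field_number, wire_type = tag >> 3, tag & 0x7
        assert wire_type == 0  # this sketch only handles varints
        value, i = read_varint(data, i)
        if field_number in known_fields:
            result[field_number] = value
    return result

old_msg = encode_varint_field(1, 150)           # "old" writer: one field
new_msg = old_msg + encode_varint_field(2, 7)   # "new" writer adds field 2

print(decode(new_msg, known_fields={1}))     # old reader skips field 2
print(decode(new_msg, known_fields={1, 2}))  # new reader sees both
```

An old reader gets `{1: 150}` from the new message; the unknown field is 
skipped rather than causing a parse failure.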

Of course, they still have the standard service problems of (a) downgrading 
gracefully when optional fields are omitted and (b) maintaining semantics over 
time. They just have them at a bigger scale than the rest of us.

The 2.4/2.5 switch showed the risk of depending on code from a company capable 
of rebuilding its whole stack overnight. They can update a dependency 
(protobuf.jar, guava.jar) and have it picked up across all their binaries. We 
don't have that luxury.

> 
> I do think there are some potential issues for our users of bumping
> the library version in a minor Hadoop release.  Until we implement
> full dependency isolation for Hadoop, there may be some disruptions to
> end-users from changing Java dependency versions.  Similarly, users
> will need to install a new native protobuf library version as well.
> So I think we should bump the protobuf versions in Hadoop 3.0, but not
> in 2.x.

+1, though I do fear that the more things we put off until "3.0", the bigger 
that switch becomes and the harder its adoption.

FWIW, one area I do find hard with protobuf is trying to set message fields 
through reflection. That is, I want code that will link against, say, the 
Hadoop 2.6 binaries but, if the extra fields of a 2.7 message are present, 
make use of them. Deep down in its internals protobuf should let me do this, 
but not at the Java API level.
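
One wire-level workaround (a sketch of manual byte manipulation, not anything 
the generated Java API offers, and with hypothetical field numbers): because 
concatenating two serialized protobuf messages merges them, code built against 
the older schema can append a hand-encoded field that only the newer schema 
defines:

```python
def encode_varint(n):
    """Encode a non-negative int as a protobuf varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def encode_varint_field(field_number, value):
    """Wire type 0 (varint): tag = (field_number << 3) | 0."""
    return encode_varint(field_number << 3) + encode_varint(value)

# Pretend these bytes came from an older generated class's toByteArray()
# (field 1 is the only field that schema knows):
serialized_old = encode_varint_field(1, 42)

# Append a hypothetical field 9 that only the newer schema defines.
# Concatenation of serialized messages is a merge on the wire, so a
# newer parser sees field 9 while an older one just skips it:
serialized_new = serialized_old + encode_varint_field(9, 1)
print(serialized_new.hex())
```

This is brittle (you must hand-encode the right tag and wire type), which is 
exactly why first-class reflection support in the API would be preferable.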
