As long as the SBT build doesn't start depending on some new
functionality that doesn't have an easy analog in Maven, the canonical
build being done only via SBT doesn't make much difference to me.
Regardless, I'm going to need to continue supporting customized builds
that fit into my Maven-ized environment.  The way things work
currently, at some point I need to examine every change to
SparkBuild.scala and to the POM files to make sure that they are still
in sync and that I have picked up all the appropriate changes into my
builds.  If there are no more POM files to look at in the future (other
than the SBT-generated ones), that actually simplifies my job in some
respects (only one place to look for changes), as long as the
translation from changed-Apache-SBT to changed-ClearStory-POM remains
fairly obvious -- that's my basic requirement, as I said previously.
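
To make the translation concrete, here is the kind of mechanical
correspondence I mean -- a minimal sketch, where the dependency shown
is purely illustrative rather than one of Spark's actual dependencies:

    // SBT side (e.g., a line in SparkBuild.scala or a build.sbt)
    libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "1.0.4"

    // The POM fragment I would keep in sync on the Maven side:
    // <dependency>
    //   <groupId>org.apache.hadoop</groupId>
    //   <artifactId>hadoop-client</artifactId>
    //   <version>1.0.4</version>
    // </dependency>

As long as changes stay at this level, picking them up into my own POMs
is straightforward.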


On Fri, Feb 21, 2014 at 9:36 AM, Patrick Wendell <pwend...@gmail.com> wrote:

> Hey Everyone,
>
> We are going to publish artifacts to maven central in the exact same
> format no matter which build system we use.
>
> For normal consumers of Spark {maven vs sbt} won't make a difference.
> It will make a difference for people who are extending the Spark build
> to do their own packaging. This is what I'm trying to gauge - does
> anyone do this in a way where they feel only maven or only sbt
> supports their particular use case?
>
> - Patrick
>
> On Fri, Feb 21, 2014 at 12:40 AM, Pascal Voitot Dev
> <pascal.voitot....@gmail.com> wrote:
> > Hi,
> >
> > My small contrib to the discussion.
> > SBT is able to publish Maven artifacts, generating the POM and all
> > the JAR and signed files.
> > So even if a POM is not checked into the project, one can still be
> > found among the published artifacts.
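> >
> > For example, a minimal sketch of the relevant settings (sbt 0.13
> > style; the coordinates here are placeholders):
> >
> >     // build.sbt -- emit a Maven-compatible POM instead of Ivy metadata
> >     organization := "org.example"   // placeholder coordinates
> >
> >     version := "0.1.0"
> >
> >     publishMavenStyle := true
> >
> >     // "sbt makePom" then writes the POM under target/ without
> >     // publishing; with the sbt-pgp plugin, "sbt publishSigned" also
> >     // produces the signed (.asc) files that Maven Central requires.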
> >
> > Pascal
> >
> >
> >
> > On Fri, Feb 21, 2014 at 9:28 AM, Paul Brown <p...@mult.ifario.us> wrote:
> >
> >> As a customer of the code, I don't care *how* the code gets built,
> >> but it is important to me that the Maven artifacts (POM files,
> >> binaries, sources, javadocs) are clean, accurate, up to date, and
> >> published on Maven Central.
> >>
> >> Some examples where structure/publishing failures have been bad for
> >> users:
> >>
> >> - For a long time (and perhaps still), Solr and Lucene were built by
> >> an Ant build that produced incorrect POMs and required potential
> >> developers to manually configure their IDEs.
> >>
> >> - For a long time (and perhaps still), Pig was built by Ant, published
> >> incorrect POMs, and failed to publish useful auxiliary artifacts like
> >> PigUnit and the PiggyBank as Maven-addressable artifacts.  (That said,
> >> thanks to Spark, we no longer use Pig...)
> >>
> >> - For a long time (and perhaps still), Cassandra depended on
> >> non-generally-available libraries (high-scale, etc.) that made it
> >> inconvenient to embed Cassandra in a larger system.  Cassandra gets a
> >> little slack because the build/structure was almost too terrible to
> >> look at prior to incubation and it's gotten better...
> >>
> >> And those are just a few projects at Apache that come to mind; I
> >> could make a longish list of offenders.
> >>
> >> btw, among other things, the Spark project probably *should* publish
> >> artifacts with a classifier to distinguish the Hadoop version linked
> >> against.
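> >>
> >> On the consumer side, that might look like the following sketch (the
> >> coordinates and classifier name are hypothetical, not anything Spark
> >> publishes today):
> >>
> >>     // SBT dependency on a hypothetical Hadoop-2-classified artifact
> >>     libraryDependencies +=
> >>       "org.apache.spark" % "spark-core_2.10" % "0.9.0" classifier "hadoop2"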
> >>
> >> I'll be a happy user of sbt-built artifacts, or if the project
> >> goes/sticks with Maven, I'm more than willing to help answer
> >> questions or provide PRs for stickier items around assemblies,
> >> multiple artifacts, etc.
> >>
> >>
> >> --
> >> p...@mult.ifario.us | Multifarious, Inc. | http://mult.ifario.us/
> >>
> >>
> >> On Thu, Feb 20, 2014 at 11:56 PM, Sean Owen <so...@cloudera.com> wrote:
> >>
> >> > Two builds is indeed a pain, since it's an ongoing chore to keep them
> >> > in sync. For example, I am already seeing that the two do not quite
> >> > declare the same dependencies (see recent patch).
> >> >
> >> > I think publishing artifacts to Maven central should be considered a
> >> > hard requirement if it isn't already one from the ASF, and it may be?
> >> > Certainly most people out there would be shocked if you told them
> >> > Spark is not in the repo at all. And that requires at least
> >> > maintaining a pom that declares the structure of the project.
> >> >
> >> > This does not necessarily mean using Maven to build, but it is one
> >> > reason that removing the POM would make the project a lot harder
> >> > for people to consume.
> >> >
> >> > Maven has its pros and cons, but there are plenty of people lurking
> >> > around who know it quite well. Certainly it's easier for the Hadoop
> >> > people to understand and work with. On the other hand, Maven
> >> > supports Scala only via a plugin, which is weaker support. sbt
> >> > seems like a fairly new, basic, ad-hoc tool. Is there an advantage
> >> > to it, other than being written in Scala (which is itself an
> >> > advantage)?
> >> >
> >> > --
> >> > Sean Owen | Director, Data Science | London
> >> >
> >> >
> >> > On Fri, Feb 21, 2014 at 4:03 AM, Patrick Wendell <pwend...@gmail.com>
> >> > wrote:
> >> > > Hey All,
> >> > >
> >> > > It's very high overhead having two build systems in Spark. Before
> >> > > getting into a long discussion about the merits of sbt vs maven, I
> >> > > wanted to pose a simple question to the dev list:
> >> > >
> >> > > Is there anyone who feels that dropping either sbt or maven
> >> > > would have a major consequence for them?
> >> > >
> >> > > By "major consequence" I mean that something becomes completely
> >> > > impossible and can't be worked around. This is different from an
> >> > > "inconvenience", i.e., something which can be worked around but
> >> > > will require some investment.
> >> > >
> >> > > I'm posing the question in this way because, if there are
> >> > > features in either build system that are absolutely unavailable
> >> > > in the other, then we'll have to maintain both for the time
> >> > > being. I'm merely trying to see whether this is the case...
> >> > >
> >> > > - Patrick
> >> >
> >>
>
