On Tue, Sep 24, 2013 at 11:57 AM, Josh Elser <josh.el...@gmail.com> wrote:

> I'm curious to hear what people think on this.
>
> I'm a really big fan of spinning up a minicluster instance to do some
> "more real" testing of software as I write it.
>
> With 1.5.0, it's a bit more painful because I have to add a bunch more
> dependencies to my project (which previously only had to depend on the
> accumulo-minicluster artifact). The list includes, but is likely not
> limited to: commons-io, commons-configuration, hadoop-client,
> zookeeper, log4j, slf4j-api, and slf4j-log4j12.
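
For concreteness, the extra block a user's pom ends up needing looks
roughly like this -- the versions below are illustrative placeholders,
not a vetted set:

  <dependency>
    <groupId>org.apache.accumulo</groupId>
    <artifactId>accumulo-minicluster</artifactId>
    <version>1.5.0</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
    <version>2.4</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>1.2.1</version>
    <scope>test</scope>
  </dependency>
  <!-- ...plus commons-configuration, zookeeper, log4j, slf4j-api, and
       slf4j-log4j12 declared the same way -->
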
>
> As best I understand it, the intent was that Hadoop will typically
> provide these artifacts at runtime, and therefore Accumulo doesn't
> need to re-bundle them itself, which I agree with (not getting into
> the whole issue of the Hadoop "ecosystem"). However, I would think
> that the minicluster should declare non-provided-scope dependencies
> on these, as there is no Hadoop installation --
>

Would this require declaring dependencies on a particular version of Hadoop
in the minicluster pom?  Or could the minicluster pom have profiles for
different Hadoop versions?  I do not know enough about Maven to know whether
you can use profiles declared in a dependency (e.g., if a user depends on
minicluster, can they activate profiles in it?).
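
From what I can tell, a consumer can't activate profiles declared in a
dependency's pom from their own build -- profiles only apply to the
build of the project that declares them. If minicluster declared a
default Hadoop version directly, a user could still override it from
their own pom via dependencyManagement, something like this (the
version shown is just an example):

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <!-- pin to whatever Hadoop version the user actually runs -->
        <version>2.0.4-alpha</version>
      </dependency>
    </dependencies>
  </dependencyManagement>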


> there's just the minicluster. As such, this would spare users from
> having to dig into our dependency management, or resort to trial and
> error, to figure out what "extra" dependencies they have to include
> in their project to actually make it work.
>
> Thoughts?
>
> - Josh
>
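
For what it's worth, if minicluster went the non-provided route, the
declarations in its pom might look like the sketch below. Leaving off
<scope>provided</scope> gives the default compile scope, which flows to
consumers transitively; the ${hadoop.version} property name here is
hypothetical:

  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>${hadoop.version}</version>
    <!-- no provided scope: users who depend on accumulo-minicluster
         pull this in transitively instead of declaring it themselves -->
  </dependency>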
