Yikes. So part of sqoop would wind up in one source repository, and part in
another? This makes my head hurt a bit.

I'm also not convinced that this helps. If I write, e.g.,
o.a.h.sqoop.HiveImporter and check it into a contrib module in the Hive
project, the main Sqoop program (o.a.h.sqoop.Sqoop) still needs to compile
against (or load at runtime) o.a.h.sqoop.HiveImporter. The net result is the
same: building and running a cohesive program requires fetching resources
from the Hive repo and compiling them in.
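
(For what it's worth, the only way I can see to break the compile-time link
is to load the importer reflectively, along the lines of the sketch below;
the importTable() method is invented purely for illustration. But the class
still has to be on the classpath at runtime, so the dependency doesn't
really go away.)

    // Hypothetical sketch only: sidestep the compile-time reference by
    // loading the Hive-side class reflectively. importTable() is invented
    // for illustration.
    public class ReflectiveHiveImport {
      public static void main(String[] args) throws Exception {
        // This compiles without any Hive-side code on the classpath...
        Class<?> cls = Class.forName("org.apache.hadoop.sqoop.HiveImporter");
        Object importer = cls.newInstance();
        // ...but HiveImporter (and whatever it drags in) must still be
        // present at runtime, so the cross-repo build problem remains.
        cls.getMethod("importTable", String.class).invoke(importer, args[0]);
      }
    }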

For the moment, though, I'm finding that the Hive JDBC interface is
misbehaving more than I care to wrangle with. My current solution is to
generate script files and run them with "hive -f <tmpfilename>", which
doesn't require any compile-time linkage. So maybe this is a non-issue for
now.
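
In case the shape of that is useful to anyone, it's roughly the following
(class and method names are illustrative, not the actual Sqoop code):

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.InputStreamReader;

    // Illustrative sketch of the "hive -f" workaround: write the generated
    // statements to a temp script and shell out to the hive CLI. The only
    // coupling is the "hive" executable being on the PATH at runtime;
    // nothing from Hive is needed at compile time.
    public class HiveScriptRunner {
      public static void runScript(String hiveStatements)
          throws IOException, InterruptedException {
        File script = File.createTempFile("hive-import", ".q");
        script.deleteOnExit();
        FileWriter out = new FileWriter(script);
        try {
          out.write(hiveStatements);
        } finally {
          out.close();
        }
        Process p = new ProcessBuilder("hive", "-f", script.getAbsolutePath())
            .redirectErrorStream(true)
            .start();
        // Drain the subprocess output so it can't block on a full pipe.
        BufferedReader r =
            new BufferedReader(new InputStreamReader(p.getInputStream()));
        for (String line = r.readLine(); line != null; line = r.readLine()) {
          System.out.println(line);
        }
        int rc = p.waitFor();
        if (rc != 0) {
          throw new IOException("hive exited with status " + rc);
        }
      }
    }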

- Aaron

On Fri, May 15, 2009 at 3:06 PM, Owen O'Malley <omal...@apache.org> wrote:

> On May 15, 2009, at 2:05 PM, Aaron Kimball wrote:
>
>> In either case, there's a dependency there.
>
> You need to split it so that there are no cycles in the dependency tree. In
> the short term it looks like:
>
> avro:
> core: avro
> hdfs: core
> mapred: hdfs, core
> hive: mapred, core
> pig: mapred, core
>
> Adding a dependency from core to hive would be bad. To integrate with
> Hive, you need to add a contrib module on the Hive side that provides the
> integration.
>
> -- Owen
>
