In modern projects there are a bazillion dependencies. When I use Hadoop, I
just put them in a lib directory in the jar. If I have a project that
depends on 50 jars, I need a way to deliver them to Spark. Maybe wordcount
can be written without dependencies, but real projects need to deliver
dependencies to the cluster.
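(For context: one workaround, absent nested-jar support, is to pass each dependency jar to spark-submit's --jars flag, which takes a comma-separated list. A minimal sketch, assuming the dependencies sit in a local lib/ directory; the class and jar names are hypothetical:)

```shell
# Build a comma-separated list of every jar under lib/
# (--jars expects commas, not the colon-separated classpath syntax).
JARS=$(ls lib/*.jar 2>/dev/null | tr '\n' ',' | sed 's/,$//')

# Hypothetical application class and jar, for illustration only.
echo "spark-submit --class com.example.WordCount --jars $JARS myapp.jar"
```

(The other common route is to shade everything into a single uber-jar at build time, e.g. with the Maven shade plugin, so nothing extra has to be shipped at submit time.)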

On Wed, Sep 10, 2014 at 11:44 PM, Sean Owen <so...@cloudera.com> wrote:

> Hm, so it is:
> http://docs.oracle.com/javase/tutorial/deployment/jar/downman.html
>
> I'm sure I've done this before though and thought it was this mechanism.
> It must be something custom.
>
> What's the Hadoop jar structure in question then? Is it something special
> like a WAR file? I confess I had never heard of this so thought this was
> about generic JAR stuff.
>
> Is the question about a lib dir in the Hadoop home dir?
> On Sep 10, 2014 11:34 PM, "Marcelo Vanzin" <van...@cloudera.com> wrote:
>
>> On Mon, Sep 8, 2014 at 11:15 PM, Sean Owen <so...@cloudera.com> wrote:
>> > This structure is not specific to Hadoop, but in theory works in any
>> > JAR file. You can put JARs in JARs and refer to them with Class-Path
>> > entries in META-INF/MANIFEST.MF.
>>
>> Funny that you mention that, since someone internally asked the same
>> question, and I spent some time looking at it.
>>
>> That's not actually how Class-Path works in the manifest. You can't
>> have jars inside other jars; the Class-Path items reference things in
>> the filesystem itself. So that solution doesn't work.
>>
>> It would be nice to add the feature Steve is talking about, though.
>>
>> --
>> Marcelo
>>
>


-- 
Steven M. Lewis PhD
4221 105th Ave NE
Kirkland, WA 98033
206-384-1340 (cell)
Skype lordjoe_com
