Or did I misread the problem?

Another thought I had at one point is that maybe Hadoop would honor the
manifest Class-Path the same way a regular java -jar invocation does, but
I looked at the code and I don't believe it did in 0.20.2.
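
For reference, a manifest Class-Path entry looks like this (the jar names
here are hypothetical):

    Manifest-Version: 1.0
    Main-Class: org.example.Driver
    Class-Path: lib/dep-a.jar lib/dep-b.jar

java -jar resolves those entries relative to the jar's own location; that
is the behavior I didn't find in Hadoop's jar runner.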

On Sun, May 8, 2011 at 9:55 PM, Dmitriy Lyubimov <[email protected]> wrote:
> I never actually ran into this error. I guess my backend code never
> called anything outside the jar.
>
> But I do, or rather did, have similar problems with my project. I
> think I already voiced my opinion on this last time.
>
> One solution is to use the shade plugin or a similar technique to create
> a single job jar with all the dependencies in it. I deem that a bad
> practice, because it unpacks the existing dependency jars, and that
> breaks things on occasion (e.g. if one of the dependencies is a signed
> jar, such as BouncyCastle, whose signature files no longer match the
> repacked contents). Projects get into using the shade plugin only to
> require a major retrofit when they hit a dependency like this.
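>
> For reference, the basic Maven setup for that approach looks roughly
> like this (a sketch; the version shown is illustrative):
>
>   <plugin>
>     <groupId>org.apache.maven.plugins</groupId>
>     <artifactId>maven-shade-plugin</artifactId>
>     <version>1.4</version>
>     <executions>
>       <execution>
>         <phase>package</phase>
>         <goals>
>           <goal>shade</goal>
>         </goals>
>       </execution>
>     </executions>
>   </plugin>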
>
> A better and more Hadoop-like technique is to rework the standard driver
> class so that it explicitly adds everything the assembly placed into
> lib/ to the backend classpath using DistributedCache. Warning: this
> functionality is somewhat broken in stock 0.20.2 and requires a hack to
> work.
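>
> A minimal sketch of that driver-side idea (class and path names are
> hypothetical, it assumes the assembly's lib/ jars were already copied
> to HDFS, and it omits the 0.20.2 workaround mentioned above):
>
>   import org.apache.hadoop.conf.Configuration;
>   import org.apache.hadoop.filecache.DistributedCache;
>   import org.apache.hadoop.fs.FileStatus;
>   import org.apache.hadoop.fs.FileSystem;
>   import org.apache.hadoop.fs.Path;
>
>   public class LibClasspathDriver {
>
>     // Adds every jar found under libDir (an HDFS directory) to the
>     // task-side classpath via the distributed cache.
>     public static void addLibsToClasspath(Configuration conf, Path libDir)
>         throws Exception {
>       FileSystem fs = FileSystem.get(conf);
>       for (FileStatus status : fs.listStatus(libDir)) {
>         Path jar = status.getPath();
>         if (jar.getName().endsWith(".jar")) {
>           DistributedCache.addFileToClassPath(jar, conf);
>         }
>       }
>     }
>   }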
>
> -d
>
> On Sun, May 8, 2011 at 5:09 PM, Jake Mannix <[email protected]> wrote:
>> I haven't run a post-0.4 Mahout build on a production Hadoop cluster before
>> this, and I'm baffled that we have a job jar which simply -does not work-.
>> Is this really not me, and our stock example jobs are broken on Hadoop?
>>
>>  -jake
>>
>> On May 8, 2011 4:14 PM, "Sean Owen" <[email protected]> wrote:
>>
>> (The build error indicates you have some old class files somewhere --
>> "clean" first)
>>
>> Here, the lib/ directory definitely has the right dependencies and it
>> still doesn't work. Benson investigated and found out it's just how
>> Hadoop works in this case.
>>
>> On Mon, May 9, 2011 at 12:06 AM, Ken Krugler <[email protected]> wrote:
>> > I haven't been ...
>>
>
