Not sure if this will help ... but I *really* like Java Web Start:
http://java.sun.com/products/javawebstart
If configured optimally, applications under javaws can be downright snappy:
http://weblogs.java.net/blog/gonzo/archive/2005/10/webstart_and_29.html
javaws should fetch only the JARs that are actually needed.
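For reference, that lazy fetching is controlled in the JNLP descriptor. A minimal sketch (the codebase, jar names, and main class are made up for illustration): only the main jar is downloaded up front, the other comes down on first use:

  <?xml version="1.0" encoding="UTF-8"?>
  <jnlp spec="1.0+" codebase="http://example.com/app" href="app.jnlp">
    <information>
      <title>Example App</title>
      <vendor>Example</vendor>
    </information>
    <resources>
      <j2se version="1.4+"/>
      <!-- fetched eagerly, before the app starts -->
      <jar href="app.jar" main="true" download="eager"/>
      <!-- fetched only when a class in it is first needed -->
      <jar href="extras.jar" download="lazy"/>
    </resources>
    <application-desc main-class="com.example.Main"/>
  </jnlp>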
I'm talking about the actual JAR. Putting the dependencies on every
node doesn't seem like a good solution, since you would have to copy
everything over every time you need something new, or sync them when
there's an update. You might even have to restart the cluster, because
I think the task runners only pick up the classpath when they start.
Do you actually mean a directory named lib inside the Job JAR, or do
you mean putting them in the lib directory where Hadoop runs? From
the looks of RunJar.java I think you mean the first option (of
course, the second option works, too).
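For anyone else poking at it, my reading of RunJar.java boils down to roughly the sketch below. This is a simplified paraphrase, not the actual source: the job JAR gets unpacked into a working directory, and every jar under lib/ is appended to the classpath before the main class is invoked.

  import java.io.File;
  import java.lang.reflect.Method;
  import java.net.URL;
  import java.net.URLClassLoader;
  import java.util.ArrayList;
  import java.util.List;

  // Simplified sketch of RunJar's behavior, assuming the job JAR has
  // already been unpacked into workDir.
  public class RunJarSketch {
    public static void runMain(File workDir, String mainClass, String[] args)
        throws Exception {
      List<URL> cp = new ArrayList<URL>();
      cp.add(workDir.toURI().toURL());                       // unpacked classes
      cp.add(new File(workDir, "classes/").toURI().toURL());
      File[] libs = new File(workDir, "lib").listFiles();
      if (libs != null) {
        for (File lib : libs) {                              // every jar in lib/
          cp.add(lib.toURI().toURL());
        }
      }
      ClassLoader loader = new URLClassLoader(cp.toArray(new URL[0]));
      Thread.currentThread().setContextClassLoader(loader);
      Class<?> main = Class.forName(mainClass, true, loader);
      Method m = main.getMethod("main", String[].class);
      m.invoke(null, new Object[] { args });
    }
  }

So a lib/ directory inside the job JAR ends up on the classpath of whatever runs the jar, which matches the first option.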
-Grant
On Oct 30, 2006, at 6:29 AM, Vetle Roeim wrote:
#1 and your suggestion of just the dependencies seem to work for me.
I also looked into Classworlds (which lets you package jars into what
they call an uberJAR, and is not quite the same as #2), but I couldn't
get it working (not that I spent much time on it). #2 should also work.
On Sat, 28 Oct 2006 22:13:35 +0200, Albert Chern <[EMAIL PROTECTED]> wrote:
I'm not sure if the first option works. If it does, let me know. One of the
developers taught me to use option 2: create a jar with your
dependencies in lib/. The tasktrackers will automatically include
everything in lib/ on their classpaths.
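To make that concrete, here is one way to build such a jar from the command line (the file names are made up; adjust for your project):

  $ mkdir -p build/lib
  $ cp dep1.jar dep2.jar build/lib/    # your dependency jars
  $ cp -r classes/* build/             # your compiled job classes
  $ (cd build && jar cf ../myjob.jar .)

The resulting myjob.jar has your classes at the top level plus lib/dep1.jar and lib/dep2.jar, which is what the tasktrackers pick up.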
On 10/28/06, Grant Ingersoll <[EMAIL PROTECTED]> wrote:
I'm not sure I'm understanding this correctly, and I don't see
anything on this in the Getting Started section, so...
It seems that when I want to run my application in distributed mode,
I should invoke bin/hadoop jar <my jar> (or bin/hadoop <my class>),
and it will copy my JAR onto the DFS and then distribute it to the
nodes?
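For concreteness, the invocation I mean is along these lines (the jar name, main class, and paths are hypothetical):

  $ bin/hadoop jar myjob.jar com.example.MyJob /input /output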