Hey
I'm trying to set up a distributed Hadoop configuration across three machines,
with logging, but I'm running into an issue: after configuration, each Hadoop
instance has no log files. The logs directory is just empty, with no data
written while the job runs.
Thanks,
www.ejinz.com
+1 to Stu's assessment! Very compelling article, Tom. Thanks also for
the code that makes it possible.
--matt
On Jul 19, 2007, at 4:56 PM, Tom White wrote:
All except the link...
http://developer.amazonwebservices.com/connect/entry.jspa?externalID=873&categoryID=112
On 19/07/07, Tom White <[EMAIL PROTECTED]> wrote:
Ok, I have completed the testing and it is working well. One problem
though. I noticed that we are using a distributed cache for the job
files. If I am creating new job jar files on the fly, but still copying
them to the job.jar location, how is this affected by distributed caching?
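For context, this is roughly the shape of my submission code (a sketch, with
made-up class and path names, using the old org.apache.hadoop.mapred API):

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class SubmitRebuiltJar {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf();
    conf.setJobName("rebuilt-jar-job");  // hypothetical job name
    // The jar is rebuilt on the fly before each submission and copied to a
    // fixed local path; setJar tells the framework which jar to ship.
    conf.setJar("/local/build/job.jar");  // hypothetical path
    // ... mapper/reducer classes and input/output paths set here ...
    JobClient.runJob(conf);
  }
}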
Dennis
Fantastic article, Tom! Thanks a bunch for writing it (along with the EC2
integration, I might add).
A link on the AmazonEC2 wiki page would probably be a good idea.
Thanks,
Stu
-----Original Message-----
From: Tom White <[EMAIL PROTECTED]>
Sent: Thu, July 19, 2007 4:56 pm
To: hadoop-user@lucene.apache.org
All except the link...
http://developer.amazonwebservices.com/connect/entry.jspa?externalID=873&categoryID=112
On 19/07/07, Tom White <[EMAIL PROTECTED]> wrote:
The title pretty much says it all, although I would say that it might
be of interest even if you're not using Amazon Web Services.
Tom
Not sure if I'm missing something here, but can you not just point
your web browser at the JobTracker's web UI, <jobtracker host>:50030? Or does
the information given there not cover what you need?
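(In more recent releases there is also a command-line route, though I'm going
from memory on the exact flags:

bin/hadoop job -list

which prints the IDs of running jobs; the per-job pages on the web UI then
show the individual map and reduce tasks.)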
Quoting Phantom <[EMAIL PROTECTED]>:
I would like to understand how the map jobs are assigned. Intuitively it
would seem that the jobs would be assigned to the nodes that contain the
blocks needed for the map task. However, this need not necessarily be true.
Figuring out where the blocks are placed would help me understand this a
little more.
On Thu, Jul 19, 2007 at 08:57:42AM -0700, Phantom wrote:
>Hi All
>
>Is there a way to find out on which nodes in my cluster the Map/Reduce jobs
>are running after I submit my job?
Short answer: No.
Is there a specific reason you need this? Maybe we can try and help you given a
more detailed description of what you're trying to do.
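As for the other half of the original question, finding where the blocks of a
given file are stored: hadoop fsck can print that, e.g.

bin/hadoop fsck /path/to/file -files -blocks -locations

Programmatically, something like the sketch below should work. The method
names are from later Hadoop releases than the one most of us are running, so
treat them as an assumption and check against your version:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Prints, for each block of the given file, the datanodes holding a replica.
public class BlockHosts {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus status = fs.getFileStatus(new Path(args[0]));
    BlockLocation[] blocks =
        fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation b : blocks) {
      System.out.println("offset=" + b.getOffset()
          + " length=" + b.getLength()
          + " hosts=" + java.util.Arrays.toString(b.getHosts()));
    }
  }
}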
Hi Anthony,
On Wed, Jul 18, 2007 at 07:42:58PM -0700, Anthony D. Urso wrote:
>I have started to use the following log4j xml to send logs to both the
>mapreduce tasklog and to the syslog daemon. Unfortunately, it creates
>a new log split in the tasklog for each log entry.
>
>Is this a problem with
Hi All,
Is there a way to find out on which nodes in my cluster the Map/Reduce jobs
are running after I submit my job? Also, is there any way to determine, given
a file, where the different blocks of the file are stored?
Thanks
A
I'm well aware of the two possibilities you're proposing, but I don't
think they would fit with the existing software of the company I'm working
at. I guess I'll have to crawl through Nutch's guts to find what I'm
looking for, and export it. Once I've managed this, I'll try to
make the tutorial