Where are you looking for the logs?
They will be available in Tasklogs. You can view them from web ui from
taskdetails.jsp page.
-Amareshwari
On 4/27/10 2:22 PM, Alexander Semenov bohtva...@gmail.com wrote:
Hi all.
I'm not sure if I'm posting to the correct mailing list; please suggest the
correct
You should use log4j rather than Apache Commons Logging.
On Tue, Apr 27, 2010 at 4:52 PM, Alexander Semenov bohtva...@gmail.com wrote:
Hi all.
I'm not sure if I'm posting to the correct mailing list; please suggest the
correct one if so.
I need to log statements from the running job, e.g.
I'm expecting to see the logs on the console since the root logger is
configured to do so.
On Tue, 2010-04-27 at 14:28 +0530, Amareshwari Sri Ramadasu wrote:
Where are you looking for the logs?
They will be available in Tasklogs. You can view them from web ui from
taskdetails.jsp page.
Alexander Semenov wrote:
Ok, thanks. Unfortunately ant is currently not installed on machine
running hadoop. What if I use slf4j + logback just in job's jar?
It depends on the classpath. You can do it in your own code by getting
whatever classloader you use and then calling getResource("log4j.properties").
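A minimal sketch of that check, assuming the file is named log4j.properties and may or may not have been bundled into the job jar (getResource simply returns null when it is not on the classpath):

```java
// Sketch: probe whether log4j.properties is visible on the task's classpath.
public class LogConfigProbe {
    public static java.net.URL findLogConfig() {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        if (cl == null) {
            // Fall back to this class's own loader when no context loader is set.
            cl = LogConfigProbe.class.getClassLoader();
        }
        // null means the file was not bundled into the job jar / classpath.
        return cl.getResource("log4j.properties");
    }

    public static void main(String[] args) {
        java.net.URL url = findLogConfig();
        System.out.println(url == null ? "log4j.properties not on classpath" : url.toString());
    }
}
```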
hi all,
I'm using contrib/index for text indexing. Can I have multiple reducers for
index writing? For example, documents of the same type would fall into the same
reduce node.
2010-04-27
ni_jiangfeng
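One common way to get that behavior is a custom Partitioner that hashes on the document type, so all documents of one type land on the same reducer. A sketch, not part of contrib/index itself; the assumption that the map output key is a Text holding the document type is mine:

```java
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Routes all records with the same document type to the same reducer.
// Assumes the map output key carries the document type (hypothetical layout).
public class DocTypePartitioner extends Partitioner<Text, Text> {
    @Override
    public int getPartition(Text type, Text value, int numPartitions) {
        // Mask the sign bit so the result is always a valid partition index.
        return (type.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}
```

With the new API you would register it via `job.setPartitionerClass(DocTypePartitioner.class)`; since numPartitions equals the number of reducers, every key of one type goes to one reduce node.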
Hi guys,
I see the exception below when I launch a job
10/04/27 10:54:16 INFO mapred.JobClient: map 0% reduce 0%
10/04/27 10:54:22 INFO mapred.JobClient: Task Id :
attempt_201004271050_0001_m_005760_0, Status : FAILED
Error initializing attempt_201004271050_0001_m_005760_0:
Hi Vishal,
What operating system are you on? The TT is having issues parsing the output
of df
-Todd
On Tue, Apr 27, 2010 at 9:03 AM, vishalsant vishal.santo...@gmail.com wrote:
Hi guys,
I see the exception below when I launch a job
10/04/27 10:54:16 INFO mapred.JobClient: map 0% reduce
It seems that this piece of code does a df to get the amount of free space
(I got this info from the IRC channel), and it is trying to do a number
conversion on the information returned by df:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 1891213200
Thanks!
2010/4/26 Amareshwari Sri Ramadasu amar...@yahoo-inc.com
context.getTaskAttemptID() gives the task attempt id, and
context.getTaskAttemptID().getTaskID() gives the task id of the reducer.
context.getTaskAttemptID().getTaskID().getId() gives the reducer number.
Thanks
Amareshwari
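For example, with the new (org.apache.hadoop.mapreduce) API a reducer could read its own number in setup(); using it as a shard index is just my illustration:

```java
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Illustrative reducer that records which numbered reduce task it is.
public class ShardAwareReducer extends Reducer<Text, Text, Text, Text> {
    private int shard;

    @Override
    protected void setup(Context context) {
        // 0-based index of this reducer among all reduce tasks,
        // e.g. 3 for attempt_201004271050_0001_r_000003_0.
        shard = context.getTaskAttemptID().getTaskID().getId();
    }
}
```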
Hi Alexander,
Where are you looking for the logs? The output of the tasks should be in
$HADOOP_LOG_DIR/userlogs/attempt*/{stdout,stderr,syslog}.
Could you provide the exact java command line your tasks are running with
(do 'ps -ef | grep Child' on one of the nodes when the job is running).
My name is Larry Mills and I am conducting a search for two Hadoop/Java
developers in San Diego or Los Angeles. Please see the job description below.
Hadoop/Java Developers Needed
San Diego or Los Angeles, California
Position Summary
We are seeking two Java/Hadoop developers who will be as
Hello,
I want to output a class which I have written as the value of the map phase.
The obvious way is to implement the Writable interface, but the problem is
the class has other classes as its member properties. The DataInput and
DataOutput interfaces used by the read and write methods of the
Take a look at the sample given in Javadoc of Writable.java
You need to serialize your data yourself:
@Override
public void readFields(DataInput in) throws IOException {
h = Text.readString(in);
sc = in.readFloat();
ran = in.readInt();
}
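For the nested-member case in the original question, each member class can itself be a Writable that write()/readFields() delegate to. A sketch; the class and field names here are made up:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

// Hypothetical value type whose member is itself a custom Writable.
public class Document implements Writable {
    private String title = "";
    private final Score score = new Score(); // nested custom type

    @Override
    public void write(DataOutput out) throws IOException {
        Text.writeString(out, title);
        score.write(out); // delegate to the member's own serialization
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        title = Text.readString(in);
        score.readFields(in); // fields must be read back in the same order
    }

    public static class Score implements Writable {
        private float value;

        @Override
        public void write(DataOutput out) throws IOException {
            out.writeFloat(value);
        }

        @Override
        public void readFields(DataInput in) throws IOException {
            value = in.readFloat();
        }
    }
}
```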
On Tue, Apr 27, 2010 at
Not sure what is happening here, in the sense: is this critical?
I had read that the status of a task is passed on to the jobtracker over
http.
Is that true ?
I see tasks killed because of expiry, even though the DataNode seems to be
alive and kicking (except for the above exception).
Can I use the Serializable interface? Alternatively, is there any way to
specify OutputFormatter for mappers like we can do for reducers?
Thanks,
Farhan
On Tue, Apr 27, 2010 at 1:19 PM, Ted Yu yuzhih...@gmail.com wrote:
Take a look at the sample given in Javadoc of Writable.java
You need to
Hi,
I've decided to refactor some of my Hadoop jobs and implement them
using MultithreadedMapper, but I got puzzled by some unexpected error
messages at run time.
Here are some relevant settings regarding my Hadoop cluster:
mapred.tasktracker.map.tasks.maximum = 1
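For reference, a sketch of wiring up MultithreadedMapper with the new API. The thread count here is arbitrary, and note that the wrapped mapper must be thread-safe, since map() is invoked from several threads concurrently (shared state should use e.g. AtomicInteger):

```java
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper;

public class MultithreadedSetup {
    // Trivial identity-style mapper; map() must be thread-safe because
    // MultithreadedMapper calls it from several threads at once.
    public static class MyMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws java.io.IOException, InterruptedException {
            ctx.write(value, value);
        }
    }

    public static void configure(Job job) {
        job.setMapperClass(MultithreadedMapper.class);
        MultithreadedMapper.setMapperClass(job, MyMapper.class);
        MultithreadedMapper.setNumberOfThreads(job, 4); // arbitrary choice
    }
}
```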
I tried to use a class which implements the Serializable interface and got
the following error:
java.lang.NullPointerException
at
org.apache.hadoop.io.serializer.SerializationFactory.getSerializer(SerializationFactory.java:73)
at
Hello,
Is it possible to output in Mapper.cleanup method since the Mapper.context
object is still available there?
Thanks,
Farhan
Yes. It's a common pattern to buffer some amount of data in the map()
method, flushing every N records and then to flush any remaining
records in the cleanup() method.
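That pattern might look roughly like this; the buffer size and key/value types are arbitrary:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Buffers records in map() and flushes the remainder from cleanup(),
// which works because the Context is still valid there.
public class BufferingMapper extends Mapper<Object, Text, Text, Text> {
    private static final int FLUSH_EVERY = 1000; // arbitrary batch size
    private final List<Text> buffer = new ArrayList<>();

    @Override
    protected void map(Object key, Text value, Context ctx)
            throws IOException, InterruptedException {
        buffer.add(new Text(value)); // copy: Hadoop reuses the value object
        if (buffer.size() >= FLUSH_EVERY) {
            flush(ctx);
        }
    }

    @Override
    protected void cleanup(Context ctx) throws IOException, InterruptedException {
        flush(ctx); // emit whatever is left over
    }

    private void flush(Context ctx) throws IOException, InterruptedException {
        for (Text t : buffer) {
            ctx.write(t, t);
        }
        buffer.clear();
    }
}
```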
On Tue, Apr 27, 2010 at 6:57 PM, Farhan Husain
farhan.hus...@csebuet.org wrote:
Hello,
Is it possible to output in
Thanks Eric.
On Tue, Apr 27, 2010 at 6:19 PM, Eric Sammer esam...@cloudera.com wrote:
Yes. It's a common pattern to buffer some amount of data in the map()
method, flushing every N records and then to flush any remaining
records in the cleanup() method.
On Tue, Apr 27, 2010 at 6:57 PM,
Can you try adding 'org.apache.hadoop.io.serializer.JavaSerialization,' to
the following config ?
C:\hadoop-0.20.2\src\core\core-default.xml(87,9):
<name>io.serializations</name>
By default, only org.apache.hadoop.io.serializer.WritableSerialization is
included.
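The override would go in your own configuration (or be set programmatically on the job conf), not in core-default.xml itself; a sketch:

```xml
<!-- e.g. in core-site.xml, or set on the job configuration -->
<property>
  <name>io.serializations</name>
  <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.JavaSerialization</value>
</property>
```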
On Tue, Apr 27, 2010 at 3:55 PM,