reply: how do I know the datanode slave status

2008-12-11 Thread koven
You can browse to http://localhost(namenode):50030 in your IE or Firefox and find the cluster status. koven, welcome to www.taobao.com, China [EMAIL PROTECTED]
--- Hi, is it possible to find out the status of the datanode
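One correction worth noting: in 0.19-era Hadoop, port 50030 is the JobTracker web UI; the NameNode UI, which lists live and dead datanodes, is normally at http://namenode:50070. From the shell, `hadoop dfsadmin -report` prints the same per-datanode status. A minimal programmatic sketch, assuming the 0.19 DistributedFileSystem API (class name here is illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

    public class DatanodeStatus {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();  // reads hadoop-site.xml from the classpath
        DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
        // One entry per datanode known to the namenode, live or dead
        for (DatanodeInfo node : dfs.getDataNodeStats()) {
          System.out.println(node.getDatanodeReport());
        }
      }
    }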

Re: Re: Re: File Splits in Hadoop

2008-12-11 Thread amitsingh
Hi, after some debugging it seems that I found the problem. In parseARC(...) the length for bzip was incorrectly set (last bzip in the split): if (totalRead > splitEnd) { break; } ... it should ideally have allowed reading from the next block instead of setting length to offset-size for
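For context, the usual split-boundary convention that the fix points toward (a hedged sketch, not amitsingh's actual parseARC code — field and method names are illustrative): a record reader handles every record that starts before its split's end, and is allowed to read past splitEnd into the next block to finish the last record, since HDFS streams those bytes transparently.

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;

    // Hypothetical reader skeleton illustrating the convention.
    abstract class BoundaryAwareReader {
      protected long pos;       // byte offset of the next record
      protected long splitEnd;  // end of this reader's split

      /** Reads one record into value, returns the offset after it. */
      protected abstract long readRecord(Text value) throws IOException;

      public boolean next(LongWritable key, Text value) throws IOException {
        // Stop only when the next record *starts* at or past the split end;
        // the record itself may extend into the following block.
        if (pos >= splitEnd) {
          return false;
        }
        key.set(pos);
        pos = readRecord(value);  // may read beyond splitEnd
        return true;
      }
    }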

Re: When I system.out.println() in a map or reduce, where does it go?

2008-12-11 Thread Tom White
You can also see the logs from the web UI (http://jobtracker:50030 by default), by clicking through to the map or reduce task that you are interested in and looking at the page for task attempts. Tom On Wed, Dec 10, 2008 at 10:41 PM, Tarandeep Singh [EMAIL PROTECTED] wrote: you can see the
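To make Tom's pointer concrete: anything a task writes to System.out lands in that task attempt's stdout log (shown on the attempt page he describes, and kept under the tasktracker's userlogs directory), while Commons Logging output goes to the syslog file beside it. A hedged sketch with illustrative class and message names:

    import java.io.IOException;
    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class LoggingMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, LongWritable> {

      private static final Log LOG = LogFactory.getLog(LoggingMapper.class);

      public void map(LongWritable key, Text value,
                      OutputCollector<Text, LongWritable> output, Reporter reporter)
          throws IOException {
        // Appears in the task attempt's "stdout" log, viewable from the web UI
        System.out.println("saw record at offset " + key);
        // Appears in the task attempt's "syslog" log
        LOG.info("processing " + value.getLength() + " bytes");
        output.collect(value, new LongWritable(1));
      }
    }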

-libjars with multiple jars broken when client and cluster reside on different OSs?

2008-12-11 Thread Stuart White
I've written a simple map/reduce job that demonstrates a problem I'm having. Please see attached example. Environment:
- hadoop 0.19.0
- cluster resides across linux nodes
- client resides on cygwin
To recreate the problem I'm seeing, do the following:
- Setup a hadoop cluster on linux
-
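One thing worth checking regardless of OS: -libjars is only honored when the job's main class runs through GenericOptionsParser, typically via ToolRunner. A minimal driver shape (job details elided, names illustrative):

    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class MyJob extends Configured implements Tool {
      public int run(String[] args) throws Exception {
        // getConf() already reflects -libjars/-D options parsed by ToolRunner
        JobConf job = new JobConf(getConf(), MyJob.class);
        // ... set input/output paths, mapper, reducer here ...
        JobClient.runJob(job);
        return 0;
      }

      public static void main(String[] args) throws Exception {
        // ToolRunner strips generic options (-libjars, -D, -conf) before run()
        System.exit(ToolRunner.run(new MyJob(), args));
      }
    }

One plausible cross-OS culprit is the platform path separator (';' under cygwin/Windows vs ':' on linux) applied when the client assembles the jar list, so it is worth checking how the comma-separated -libjars value survives on the cygwin side.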

Re: question: NameNode hanging on startup as it intends to leave safe mode

2008-12-11 Thread Karl Kleinpaste
On Wed, 2008-12-10 at 11:52 -0800, Konstantin Shvachko wrote: This is probably related to HADOOP-4795. Thanks for the observation and reference. However, my sense is that the bug report you reference reflects the NameNode going into an infinite-loop spin, whereas the situation we have faced concerns

API Documentation question - WritableComparable

2008-12-11 Thread Andy Sautins
I have a question regarding the Hadoop API documentation for 0.19. The question is in regard to: http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/io/WritableComparable.html. The document shows the following for the compareTo method: public int

5 node Hadoop Cluster!!! Some Doubts...

2008-12-11 Thread Siddharth Malhotra
Hey, I am a student working on a project using Hadoop. I have successfully implemented the project on a single node and in Pseudo-Distributed Mode. I would now be implementing it on a 5-node cluster, but I wanted to know if there would be any specific way I should set up the Tasktracker, Jobtracker
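The standard shape for a small cluster, sketched with placeholder hostnames (master, slave1...slave4): run the NameNode and JobTracker on the master, list the slave hostnames one per line in conf/slaves, and point every node's hadoop-site.xml at the master, roughly:

    <?xml version="1.0"?>
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>   <!-- NameNode -->
      </property>
      <property>
        <name>mapred.job.tracker</name>
        <value>master:9001</value>          <!-- JobTracker -->
      </property>
      <property>
        <name>dfs.replication</name>
        <value>3</value>
      </property>
    </configuration>

With passwordless ssh from the master to the slaves, bin/start-dfs.sh and bin/start-mapred.sh will then bring up the DataNodes and TaskTrackers on each slave.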

Re: Libhdfs / fuse_dfs crashing

2008-12-11 Thread Brian Bockelman
Ah-ha, I found it! Anyone see the problem? If you want to cheat, you can look at the ticket...
size_t num_read;
size_t total_read = 0;
while (size - total_read > 0
       && (num_read = hdfsPread(fh->fs, fh->hdfsFH, offset + total_read,
                                buf + total_read, size - total_read)) > 0) {
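With the stripped comparison operators restored as above, the problem becomes visible: num_read is declared size_t, but hdfsPread returns a signed count, so an error return of -1 wraps to a huge unsigned value and the loop mistakes a failed read for a successful one (presumably what the ticket spells out). For comparison, a hedged Java sketch of the same positioned-read loop with a signed counter; the file name comes from the command line and the buffer size is arbitrary:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PreadLoop {
      public static void main(String[] args) throws IOException {
        FileSystem fs = FileSystem.get(new Configuration());
        FSDataInputStream in = fs.open(new Path(args[0]));
        byte[] buf = new byte[64 * 1024];
        int total = 0;
        while (total < buf.length) {
          // Positioned read, the Java analogue of hdfsPread
          int n = in.read(total, buf, total, buf.length - total);
          if (n < 0) {  // -1 means EOF/error; an unsigned counter would hide this
            break;
          }
          total += n;
        }
        in.close();
        System.out.println("read " + total + " bytes");
      }
    }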

Re: API Documentation question - WritableComparable

2008-12-11 Thread Tarandeep Singh
The example is just to illustrate how one should implement one's own WritableComparable class, and in the compareTo method it is just showing how it works in the case of IntWritable, with value as its member variable. You are right, the example's code is misleading. It should have used either timestamp
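For reference, a self-contained version of what the javadoc example presumably intends, comparing the class's own fields as Tarandeep suggests (a hedged reconstruction, not the actual javadoc text):

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.WritableComparable;

    public class MyWritableComparable implements WritableComparable {
      private int counter;
      private long timestamp;

      public void write(DataOutput out) throws IOException {
        out.writeInt(counter);
        out.writeLong(timestamp);
      }

      public void readFields(DataInput in) throws IOException {
        counter = in.readInt();
        timestamp = in.readLong();
      }

      public int compareTo(Object o) {
        MyWritableComparable that = (MyWritableComparable) o;
        // Order by counter, breaking ties on timestamp
        if (this.counter != that.counter) {
          return this.counter < that.counter ? -1 : 1;
        }
        return this.timestamp < that.timestamp ? -1
             : this.timestamp == that.timestamp ? 0 : 1;
      }
    }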

Re: JDBC input/output format

2008-12-11 Thread Edward J. Yoon
Does anyone think about database-to-database migration using Hadoop? On Tue, Dec 9, 2008 at 4:07 AM, Alex Loddengaard a...@cloudera.com wrote: Here are some useful links with regard to reading from and writing to MySQL: http://issues.apache.org/jira/browse/HADOOP-2536
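On the DB-to-DB idea: HADOOP-2536 (linked above) landed as org.apache.hadoop.mapred.lib.db in 0.19, so one hedged shape is a two-stage pipeline — DBInputFormat to dump the source table into HDFS, then a second job with DBOutputFormat to load the target. (Both formats read connection settings from the job's single DBConfiguration, so one job can only talk to one database.) Stage 1 sketched below; driver class, JDBC URL, table and column names are all placeholders:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.lib.db.DBConfiguration;
    import org.apache.hadoop.mapred.lib.db.DBInputFormat;
    import org.apache.hadoop.mapred.lib.db.DBWritable;

    // Placeholder record type: one row of the hypothetical source table.
    class MyRecord implements Writable, DBWritable {
      long id;
      String payload;

      public void readFields(ResultSet rs) throws SQLException {
        id = rs.getLong(1);
        payload = rs.getString(2);
      }
      public void write(PreparedStatement ps) throws SQLException {
        ps.setLong(1, id);
        ps.setString(2, payload);
      }
      public void readFields(DataInput in) throws IOException {
        id = in.readLong();
        payload = in.readUTF();
      }
      public void write(DataOutput out) throws IOException {
        out.writeLong(id);
        out.writeUTF(payload);
      }
    }

    public class DbExportConfig {
      public static void configure(JobConf job) {
        DBConfiguration.configureDB(job,
            "com.mysql.jdbc.Driver", "jdbc:mysql://source-host/sourcedb");
        DBInputFormat.setInput(job, MyRecord.class,
            "source_table", null /* conditions */, "id" /* orderBy */,
            "id", "payload");
      }
    }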

Gzip compressed input?

2008-12-11 Thread Delip Rao
I am having trouble reading gzip-compressed input. Is this a known problem? Any workarounds? (I am using gzip 1.3.3.) Thanks, Delip
$ hadoop dfs -ls input
Found 1 items
-rw-r--r--   3 huser supergroup   17532230 2008-12-11 23:52 /user/huser/input/words.gz
$ hadoop jar hadoop-0.19.0-examples.jar
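Two things usually explain gzip trouble here: the codec is chosen by file extension (.gz is covered by the default io.compression.codecs setting), and gzip files are not splittable, so each .gz file becomes exactly one map task. A hedged sketch for verifying that the codec machinery sees the file (class name illustrative):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;

    public class GzipCheck {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path path = new Path(args[0]);  // e.g. /user/huser/input/words.gz
        CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(path);
        if (codec == null) {
          System.err.println("no codec matched " + path + "; check the extension");
          return;
        }
        FileSystem fs = FileSystem.get(conf);
        // TextInputFormat decompresses the same way before splitting lines
        BufferedReader r = new BufferedReader(
            new InputStreamReader(codec.createInputStream(fs.open(path))));
        System.out.println("first line: " + r.readLine());
        r.close();
      }
    }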

problem in inputSplit

2008-12-11 Thread ZhiHong Fu
Hello, now I have encountered a very weird problem in a custom split, in which I define an IndexDirSplit containing a list of index directory paths. I implemented it like this:
package zju.edu.tcmsearch.lucene.search.format;
import java.io.IOException;
import java.io.DataInput;
import
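Since the message is cut off, here is a hedged reconstruction of the shape such a split usually needs: a custom InputSplit must implement Writable and keep a no-argument constructor, because the framework serializes it to the tasktrackers and re-creates it reflectively. The class name follows the post; everything else is illustrative:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.mapred.InputSplit;

    public class IndexDirSplit implements InputSplit {
      private List<String> indexDirs = new ArrayList<String>();

      public IndexDirSplit() {}  // required: instantiated reflectively

      public void add(String dir) { indexDirs.add(dir); }
      public List<String> getIndexDirs() { return indexDirs; }

      public long getLength() throws IOException {
        return indexDirs.size();  // only a scheduling hint
      }

      public String[] getLocations() throws IOException {
        return new String[0];  // no locality information
      }

      public void write(DataOutput out) throws IOException {
        out.writeInt(indexDirs.size());
        for (String dir : indexDirs) {
          out.writeUTF(dir);
        }
      }

      public void readFields(DataInput in) throws IOException {
        int n = in.readInt();
        indexDirs = new ArrayList<String>(n);
        for (int i = 0; i < n; i++) {
          indexDirs.add(in.readUTF());
        }
      }
    }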