You can open http://&lt;namenode&gt;:50030 in IE or Firefox
and view the cluster status.
koven
welcome to www.taobao.com China
[EMAIL PROTECTED]
---
Hi,
Is it possible to find out the status of the datanodes?
Hi ,
After some debugging, it seems that I found the problem.
In parseARC(...) the length for bzip was incorrectly set (for the last bzip
block in the split):
if (totalRead >= splitEnd) {
break;
} ... it should ideally have allowed reading from the next block instead of
setting length to offset-size for
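The fix described above follows the usual split-boundary rule: a record that straddles the end of a split must be read to completion, not truncated at the boundary. A minimal illustrative sketch (not the actual parseARC code; the record format here is just newline-delimited bytes for demonstration):

```java
import java.nio.charset.StandardCharsets;

// Illustrative sketch only: a reader assigned the byte range
// [splitStart, splitEnd) must finish the record that straddles splitEnd
// instead of truncating at the split boundary.
public class SplitBoundaryRead {
    // Reads newline-terminated records starting at 'pos'; keeps reading
    // past 'splitEnd' until the current record is complete.
    static String readAcrossBoundary(byte[] data, int pos, int splitEnd) {
        StringBuilder out = new StringBuilder();
        while (pos < data.length) {
            byte b = data[pos++];
            out.append((char) b);
            // Only stop at a record boundary, and only once past splitEnd.
            if (b == '\n' && pos >= splitEnd) {
                break;
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        byte[] data = "rec1\nrec2-straddles\nrec3\n"
                .getBytes(StandardCharsets.UTF_8);
        // splitEnd (8) falls in the middle of "rec2-straddles", yet the
        // whole record is returned.
        System.out.println(readAcrossBoundary(data, 0, 8));
    }
}
```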
You can also see the logs from the web UI (http://jobtracker:50030
by default), by clicking through to the map or reduce task that you
are interested in and looking at the page for task attempts.
Tom
On Wed, Dec 10, 2008 at 10:41 PM, Tarandeep Singh [EMAIL PROTECTED] wrote:
you can see the
I've written a simple map/reduce job that demonstrates a problem I'm
having. Please see attached example.
Environment:
hadoop 0.19.0
cluster resides across linux nodes
client resides on cygwin
To recreate the problem I'm seeing, do the following:
- Set up a Hadoop cluster on Linux
-
On Wed, 2008-12-10 at 11:52 -0800, Konstantin Shvachko wrote:
This is probably related to HADOOP-4795.
Thanks for the observation and reference. However, my sense is that the
bug report you reference reflects the NameNode going into an infinite-loop
spin, whereas the situation we have faced concerns
I have a question regarding the Hadoop API documentation for .19. The
question is in regard to:
http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/io/WritableComparable.html
The document shows the following for the compareTo method:
public int
Hey,
I am a student working on a project using Hadoop. I have successfully
implemented the project on a single node and in Pseudo-Distributed Mode.
I will now be implementing it on a 5-node cluster, but I wanted to know if
there would be any specific way I should set up the TaskTracker, JobTracker
Ah-ha, I found it!
Anyone see the problem? If you want to cheat, you can look at the
ticket...
size_t num_read;
size_t total_read = 0;
while (size - total_read > 0 && (num_read = hdfsPread(fh->fs, fh->
hdfsFH, offset + total_read, buf + total_read, size - total_read))
> 0) {
The example is just to illustrate how one should implement one's own
WritableComparable class, and the compareTo method is just showing how
it works in the case of IntWritable, with value as its member variable.
You are right, the example's code is misleading. It should have used either
timestamp
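To make the pattern from the javadoc concrete, here is a self-contained sketch of a key class with the three methods a WritableComparable needs. In a real job it would declare `implements WritableComparable<MyKey>`; the Hadoop interface is omitted here only so the snippet compiles without the Hadoop jars, and the class name `MyKey` is made up for illustration:

```java
import java.io.*;

// Sketch of the WritableComparable pattern: write()/readFields() for
// serialization, compareTo() for sorting keys in the shuffle.
public class MyKey {
    private int value;          // the field compareTo orders by

    public MyKey() {}           // no-arg constructor, needed for readFields
    public MyKey(int value) { this.value = value; }

    // Writable half: serialize the fields.
    public void write(DataOutput out) throws IOException {
        out.writeInt(value);
    }

    // Writable half: deserialize the fields, in the same order.
    public void readFields(DataInput in) throws IOException {
        value = in.readInt();
    }

    // Comparable half: order keys by their value field.
    public int compareTo(MyKey other) {
        return Integer.compare(this.value, other.value);
    }

    public static void main(String[] args) throws IOException {
        MyKey a = new MyKey(3), b = new MyKey(7);
        // Round-trip 'a' through the Writable methods.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        a.write(new DataOutputStream(bos));
        MyKey copy = new MyKey();
        copy.readFields(new DataInputStream(
                new ByteArrayInputStream(bos.toByteArray())));
        System.out.println(a.compareTo(b) < 0 && copy.compareTo(a) == 0);
    }
}
```

The point the thread makes stands: compareTo should compare the instance's own member field against the other key's, exactly as IntWritable does internally.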
Has anyone thought about database-to-database migration using Hadoop?
On Tue, Dec 9, 2008 at 4:07 AM, Alex Loddengaard a...@cloudera.com wrote:
Here are some useful links with regard to reading from and writing to MySQL:
http://issues.apache.org/jira/browse/HADOOP-2536
I am having trouble reading gzip compressed input. Is this a known
problem? Any workarounds?
(I am using gzip 1.3.3 )
Thanks,
Delip
$ hadoop dfs -ls input
Found 1 items
-rw-r--r-- 3 huser supergroup 17532230 2008-12-11 23:52
/user/huser/input/words.gz
$ hadoop jar hadoop-0.19.0-examples.jar
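One cheap check before blaming Hadoop: verify the archive itself is intact by streaming it with plain Java's GZIPInputStream, independent of the cluster. (The local filename below is just an example; note also that gzip is not a splittable format, so a .gz input file is processed by a single map task.)

```java
import java.io.*;
import java.util.zip.*;

// Sanity check for a .gz file, independent of Hadoop: if plain Java can
// decompress the whole stream, the archive is not corrupt.
public class GzipCheck {
    static long countDecompressedBytes(InputStream raw) throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(raw)) {
            byte[] buf = new byte[8192];
            long total = 0;
            int n;
            while ((n = gz.read(buf)) > 0) {
                total += n;
            }
            return total;
        }
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical local copy of the file from the listing above.
        try (InputStream in = new FileInputStream("words.gz")) {
            System.out.println("decompressed bytes: "
                    + countDecompressedBytes(in));
        }
    }
}
```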
Hello,
Now I have encountered a very weird problem with a custom split, in which I
define an IndexDirSplit containing a list of index directory Paths.
I implemented it like this:
package zju.edu.tcmsearch.lucene.search.format;
import java.io.IOException;
import java.io.DataInput;
import
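A common pitfall with a custom split like the IndexDirSplit above is the serialization round-trip: the framework creates the split reflectively and calls readFields(), so it must keep a no-arg constructor and rebuild all state there. A self-contained sketch of that part (the Hadoop InputSplit base type is omitted so this compiles on its own, and the class/field names are made up for illustration):

```java
import java.io.*;
import java.util.*;

// Sketch of the serialization a custom split carrying a list of directory
// paths needs. In a real job this would extend
// org.apache.hadoop.mapred.InputSplit (or the mapreduce equivalent).
public class IndexDirSplitSketch {
    private List<String> indexDirs = new ArrayList<>();

    public IndexDirSplitSketch() {}   // required: splits are created reflectively

    public void addDir(String path) { indexDirs.add(path); }
    public List<String> getDirs() { return indexDirs; }

    // Serialize: write the count first, then each path.
    public void write(DataOutput out) throws IOException {
        out.writeInt(indexDirs.size());
        for (String dir : indexDirs) {
            out.writeUTF(dir);
        }
    }

    // Deserialize: clear any stale state, then read back in the same order.
    public void readFields(DataInput in) throws IOException {
        indexDirs.clear();
        int n = in.readInt();
        for (int i = 0; i < n; i++) {
            indexDirs.add(in.readUTF());
        }
    }
}
```

If write() and readFields() disagree on order or count, the split arrives at the task with an empty or corrupted path list, which produces exactly this kind of "weird" behavior.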