Hello Alex.
Please do not cross-post to all the lists (it causes confusion, and
triggers multiple conversations -- no good to any of us). This
discussion belongs on common-user@.
On Tue, Apr 12, 2011 at 8:15 AM, Alex Luya alexander.l...@gmail.com wrote:
BUILD FAILED
.../branch-0
Hello,
I am new to hadoop.
I am using Hadoop 0.20.2 on Ubuntu.
I recently installed and configured it using tutorials available on the
internet.
My Hadoop installation is running properly.
But whenever I try to run the wordcount example, the program gets stuck at
the reduce part. After long
Page 312 of Tom White's Hadoop: The Definitive Guide mentions that the
Offline Image Viewer supplied with 0.21.0 can be used to test the integrity
of any backups taken from the Secondary Namenode (previous.checkpoint)
directory.
How does this work in practice?
I've tested the tool on a valid
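For what it's worth, a minimal invocation sketch. The paths here are placeholders, and this assumes the 0.21 bin/hdfs script:

```shell
# Sketch: point the Offline Image Viewer at a copied checkpoint image.
# /backup/previous.checkpoint is a placeholder path for your SNN backup.
bin/hdfs oiv -i /backup/previous.checkpoint/fsimage -o /tmp/fsimage.txt
# On a readable image the default Ls processor writes a namespace listing
# to the -o file; a parse failure is a strong hint the image is damaged.
```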
Hi,
Can you please tell me a way to pass global values (like matrix size or block
sizes) to mappers and reducers? Till now I am using HBase to pass values or
modify them; I think it is taking more time that way. It would be a great help
if you could tell me a different approach. I just
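A simpler route than HBase is to ship the values with the job itself. In the Java API, the driver can call conf.set("matrix.size", "1000") on the job's Configuration, and each mapper or reducer reads it back with context.getConfiguration().get("matrix.size") in setup(). For a streaming job the same idea travels as environment variables via -cmdenv NAME=VALUE. A minimal sketch of the streaming flavor in Python; MATRIX_SIZE and BLOCK_SIZE are hypothetical names, not anything Hadoop defines:

```python
import os

# Sketch of a streaming mapper that reads job-wide parameters from the
# environment instead of HBase. The variables would be supplied at submit
# time with: -cmdenv MATRIX_SIZE=1000 -cmdenv BLOCK_SIZE=64
def map_line(line, env=os.environ):
    matrix_size = int(env.get("MATRIX_SIZE", "0"))
    block_size = int(env.get("BLOCK_SIZE", "1"))
    row = int(line.strip())
    if row >= matrix_size:          # parameter used as a sanity bound
        return None
    # Emit (block index, row) so rows land in per-block reduce groups.
    return "%d\t%d" % (row // block_size, row)
```

Because the values ride along with the job configuration, every task sees them with no external lookup per record.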
Hello Praveenesh,
On Thu, Apr 14, 2011 at 3:42 PM, praveenesh kumar praveen...@gmail.com wrote:
attempt_201104142306_0001_m_00_0, Status : FAILED
Too many fetch-failures
11/04/14 23:32:50 WARN mapred.JobClient: Error reading task output: Invalid
argument or cannot assign requested address
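One frequent cause of "Too many fetch-failures" together with "cannot assign requested address" on Ubuntu single-node setups is hostname resolution: Ubuntu maps the hostname to 127.0.1.1 by default, so the TaskTracker can advertise an address reducers cannot connect to. A sketch of an /etc/hosts that avoids this; the IP and hostname are placeholders:

```
127.0.0.1    localhost
192.168.1.10 yourhostname
# do not map yourhostname to 127.0.1.1 (Ubuntu's default entry)
```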
Hello Siva,
Please avoid cross-posting to multiple lists.
On Thu, Apr 14, 2011 at 4:11 PM, siva prakash
sivaprakashcs07b...@gmail.com wrote:
Hi,
Can you please tell me a way to pass global values (like matrix size or block
sizes) to mappers and reducers? Till now I am using HBase to pass values
Hi,
Where can I see the logs?
I have done a single node cluster installation and I am running Hadoop on a
single machine only. Both map and reduce are running on the same machine.
Thanks,
Praveenesh
On Thu, Apr 14, 2011 at 4:43 PM, Harsh J ha...@cloudera.com wrote:
Hello Praveenesh,
On Thu, Apr
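On the logs question: in a stock 0.20 single-node setup, daemon and task logs live under the install directory, and per-task logs are also reachable from the JobTracker web UI (http://localhost:50030). A sketch, assuming HADOOP_HOME points at your installation:

```shell
ls "$HADOOP_HOME/logs"           # daemon logs: namenode, datanode, jobtracker, tasktracker
ls "$HADOOP_HOME/logs/userlogs"  # per-task-attempt stdout, stderr and syslog
```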
James,
If I understand correctly, you get a set of immutable attributes, then a state
which can change.
If you wanted to use HBase...
I'd say create a unique identifier for your immutable attributes, then store
the unique id, timestamp, and state. Assuming
that you're really interested in looking at
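Michael's layout can be sketched as a composite row key: a stable hash of the immutable attributes as the unique id, followed by a reversed timestamp so a prefix scan returns the newest state first. A toy Python version (the key construction is illustrative; in HBase this would become the row key bytes, with the state stored as a cell value):

```python
import hashlib
import struct

MAX_TS = 2**63 - 1

def row_key(immutable_attrs, timestamp_ms):
    """Composite key: md5(attrs) + reversed timestamp, newest-first order."""
    uid = hashlib.md5("|".join(sorted(immutable_attrs)).encode()).hexdigest()
    # Reversed timestamp: later events pack to smaller values, so a scan
    # from the uid prefix returns the most recent state first.
    return uid + struct.pack(">q", MAX_TS - timestamp_ms).hex()
```

Sorting the attributes before hashing makes the id independent of the order in which the attributes arrive.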
If all the seigel/seigal/segel gang don't chime in, it'd be weird.
What size of data are we talking?
James
On 2011-04-14, at 11:06 AM, Michael Segel michael_se...@hotmail.com wrote:
James,
If I understand correctly, you get a set of immutable attributes, then a state
which can change.
If
Hi guys,
This is a program which used to work, but I have probably changed something,
and now it is taking me a lot of time to figure out what is causing the
problem. In the mapper:
map-function:
--
...
tempSeq = new
Hi all,
a tricky problem here. When we prepare an input path, it should be a path on
HDFS by default, right? Under what condition does this become a path on the
local file system? I followed a program which worked well, and the input path
was something like hdfs:// But when I apply a similar driver
Hi,
I have a question: is it possible to install HDFS on physical machines
and run MapReduce on virtual machines? If so, is the performance
acceptable for about 10 GB/hour of data?
Thanks a lot.
--
Changbin Wang
Tel: 0046765825533
Network and Distributed System of Chalmers
Sweden
2011/4/15 changbin.wang owen2...@gmail.com
Hi,
I have a question: is it possible to install HDFS on physical machines
and run MapReduce on virtual machines?
Possible.
If so, is the performance acceptable for about 10 GB/hour of data?
It will have some loss in performance.
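For scale, the arithmetic says 10 GB/hour is a light load, roughly 3 MB/s sustained, so the virtualization penalty on the MapReduce side is unlikely to matter at this rate:

```python
# Back-of-the-envelope: sustained rate implied by 10 GB/hour.
gb_per_hour = 10
mb_per_second = gb_per_hour * 1024 / 3600.0  # ~2.84 MB/s
```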
Seems like something is setting fs.default.name programmatically.
Another possibility is that $HADOOP_CONF_DIR isn't on the classpath in
the second case.
Hope it helps,
Cos
On Thu, Apr 14, 2011 at 20:24, Gang Luo lgpub...@yahoo.com.cn wrote:
Hi all,
a tricky problem here. When we prepare an
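Following Cos's first suspicion: when fs.default.name is not picked up (or is overridden in code), relative paths silently resolve against the local file system. In 0.20 the setting belongs in core-site.xml under $HADOOP_CONF_DIR; a sketch with placeholder host and port:

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode-host:9000</value>
  </property>
</configuration>
```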