Hey guys,
We are trying to figure out why many of our Map/Reduce jobs on the cluster are
failing.
In the logs of the failing jobs we are getting this message:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File *a
filename* could only be replicated to 0 nodes, instead of 1
Check the FAQ:
http://wiki.apache.org/hadoop/FAQ#What_does_.22file_could_only_be_replicated_to_0_nodes.2C_instead_of_1.22_mean.3F
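Beyond the FAQ, a quick way to confirm whether the datanodes are actually alive and have free space (a sketch; these commands assume a 0.20-era cluster, run from the Hadoop install directory by a user with HDFS access):

```shell
# Summarize capacity, usage, and liveness of every datanode
bin/hadoop dfsadmin -report

# Check file health and where each block's replicas actually live
bin/hadoop fsck / -blocks -locations
```

If -report shows dead datanodes, or live nodes with essentially no remaining space, that usually explains the "replicated to 0 nodes" error.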
On Tue, Apr 5, 2011 at 4:53 PM, Guy Doulberg guy.doulb...@conduit.com wrote:
Hey guys,
We are trying to figure out why many of our Map/Reduce jobs on the cluster
Agree with Harsh,
I think you need to write your own RecordReader.
On Tue, Apr 5, 2011 at 3:37 PM, Harsh Chouraria ha...@cloudera.com wrote:
Hello Kevin,
On Fri, Mar 25, 2011 at 12:52 AM, kevin.le...@thomsonreuters.com wrote:
-Dstream.map.output.field.separator= \
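For reference, a minimal streaming invocation showing where the separator options go (a sketch; the input/output paths and the cat mapper/reducer are placeholders, and the jar path assumes Hadoop 0.20.2):

```shell
bin/hadoop jar contrib/streaming/hadoop-0.20.2-streaming.jar \
  -D stream.map.output.field.separator=. \
  -D stream.num.map.output.key.fields=2 \
  -input /user/hadoop/in \
  -output /user/hadoop/out \
  -mapper /bin/cat \
  -reducer /bin/cat
```

stream.map.output.field.separator tells the framework which character splits the mapper's output line into fields, and stream.num.map.output.key.fields says how many of those fields form the key; everything after that is the value.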
Thanks,
We think the problem is an unbalanced HDFS cluster: some of the datanodes are
more than 90% full, and some are less than 30% full - this happened because the
nodes with free space are newer.
We think that when a tasktracker gets a task, it tries to write its map
output first to its
Dear all,
I am sorry for posting a second time, but I found it very difficult to run
the wordcount-nopipe.cc program in the
/home/hadoop/project/hadoop-0.20.2/src/examples/pipes/impl directory on
a running Hadoop cluster.
In the beginning I faced the exception below:
bin/hadoop pipes -D
Hi all,
Can HDFS run over a raw disk which is mounted at a mount point with
no file system? Or does it interact only with a POSIX-compliant file
system?
thanks,
Matthew
Hi,
Does anyone know how I can limit my MapReduce job's CPU usage on Windows XP?
Regards,
Baran
Hi,
First of all, I use Hadoop-0.20.2 on Windows XP Pro with the Eclipse plug-in. I
have a cluster with 1 master (JobTracker and NameNode) and 4 slaves (DataNode
and TaskTracker).
I have had some problems with my Hadoop cluster in the last few weeks. When I
start a job with a big input (4 GB - it may be not to
You can configure the balancer to use higher bandwidth. That can speed it
up by 10x.
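For reference, the balancer's per-datanode bandwidth is controlled by dfs.balance.bandwidthPerSec in hdfs-site.xml (a sketch; the 10 MB/s value is only an example - the 0.20 default is 1 MB/s, and the datanodes must be restarted for the change to take effect):

```xml
<property>
  <name>dfs.balance.bandwidthPerSec</name>
  <!-- Bytes per second each datanode may use for balancing; default 1048576 (1 MB/s) -->
  <value>10485760</value>
</property>
```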
On Tue, Apr 5, 2011 at 2:54 AM, Guy Doulberg guy.doulb...@conduit.com wrote:
We are running the balancer, but it takes a lot of time... and during this
time the cluster is not working.
Hi,
I have a general question about Hadoop MapReduce. I sometimes lose my
TaskTracker; can this happen because of high CPU usage (100%) by the
TaskTracker?
Regards,
Baran
Mark,
Make sure you add cygwin/bin to your global PATH variable in Windows
too... and echo the PATH if you're running in a command window, to make sure
it shows up there... When running through Eclipse, it should pick up the
PATH variable.
Good luck - it's worth the trouble, and this does work.
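A minimal check from a Windows command window (a sketch; C:\cygwin\bin is the default Cygwin install path - adjust if yours differs):

```shell
rem Add Cygwin's binaries to PATH for this session only
set PATH=%PATH%;C:\cygwin\bin

rem Confirm it shows up
echo %PATH%

rem Hadoop's scripts need bash and friends reachable on the PATH
bash --version
```

Note that `set` only affects the current window; for Eclipse to see it, add cygwin\bin to the global PATH via System Properties and restart Eclipse afterwards.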
Another approach you may well have already considered, but may want to
reconsider: use the free version of VMware Player running on your winXXX
environment (host) and install a Linux distro of your choice as the guest OS.
You can spin up essentially any number of instances that way,
and not be concerned
Guys,
I really appreciate all your answers. I am sure that I could have made
Hadoop run under Windows - all of you have done it - but I may go with
Stephen's advice after all. Since I am doing this for my open source
eDiscovery project, FreeEed (https://github.com/markkerzner/FreeEed), I
need
On Apr 5, 2011, at 5:22 AM, Matthew John wrote:
Can HDFS run over a raw disk which is mounted at a mount point with
no file system? Or does it interact only with a POSIX-compliant file
system?
It needs a POSIX file system.