I think my HDFS may be sick, but when I run jobs on our 8-node cluster I have
started seeing
11/10/26 15:42:30 WARN mapred.JobClient: Error reading task outputhttp://
glados6.systemsbiology.net:50060/tasklog?plaintext=true&taskid=attempt_201110261134_0009_m_01_2&filter=stdout
11/10/26 15:42:30
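(If HDFS itself is suspect, a quick health check from the namenode host is worth a look; these are the stock 0.20-era commands, sketched here for reference:

$ hadoop fsck /            # reports missing, corrupt, and under-replicated blocks
$ hadoop dfsadmin -report  # lists live/dead datanodes and per-node capacity

Note that the warning above is the JobClient failing to fetch a task's logs from a TaskTracker's web port (50060), so the TaskTracker logs on glados6 are worth checking as well.)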
Hi Joey. I actually migrated from CDH3u0 to EMR a while back due to
stability issues that turned out to be completely AMI/AKI-related, so I may
consider migrating back at some point. If so, I'll definitely give Whirr
a shot. Thanks!
Kai Ju
On Wed, Oct 26, 2011 at 3:40 PM, Joey Echeverria wrote:
You can also check out Apache Whirr (http://whirr.apache.org/) if you
decide to roll your own Hadoop clusters on EC2. It's crazy easy to get
a cluster up and running with it.
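Roughly (assuming the hadoop.properties recipe that ships in Whirr's recipes/
directory, with your AWS credentials exported in the environment):

$ bin/whirr launch-cluster --config recipes/hadoop.properties   # provisions and configures the whole cluster
$ bin/whirr destroy-cluster --config recipes/hadoop.properties  # tears it down when you're done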
-Joey
On Wed, Oct 26, 2011 at 3:04 PM, Kai Ju Liu wrote:
Hi Arun. Thanks for the prompt reply! It's a bit of a bummer to hear that,
but I'll definitely look into the upgrade path. Thanks again!
Kai Ju
On Wed, Oct 26, 2011 at 3:01 PM, Arun C Murthy wrote:
Sorry. This mostly won't work... we have significant changes in the interface
between the JobTracker and the schedulers (FS/CS) between 0.20.2 and 0.20.203
(performance, better limits, etc.).
Your best bet might be to provision Hadoop yourself on EC2 with 0.20.203+.
Good luck!
Arun
On Oct 26, 2011, at 2:5
Hi. I'm currently running a Hadoop cluster on Amazon's EMR service, which
appears to be the 0.20.2 codebase plus several patches from the
(deprecated?) 0.20.3 branch. I'm interested in switching from the fair
scheduler to the capacity scheduler, but I'm also interested in the
user-limit-factor setting.
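(For context, enabling the capacity scheduler on 0.20.203+ means pointing the
JobTracker at it in mapred-site.xml and setting per-queue limits in
conf/capacity-scheduler.xml. A sketch based on my reading of the 0.20.203
docs; double-check the property names against your version, and the value of
2 is only an illustration:

<!-- mapred-site.xml -->
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.CapacityTaskScheduler</value>
</property>

<!-- conf/capacity-scheduler.xml: lets a single user of the default queue
     exceed its capacity share by up to 2x when the rest of the cluster
     is idle -->
<property>
  <name>mapred.capacity-scheduler.queue.default.user-limit-factor</name>
  <value>2</value>
</property>)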
Moving this discussion to CDH-USER since it sounds like it's the Cloudera VM.
BCC mapreduce-user
On Wed, Oct 26, 2011 at 2:17 AM, Stephen Boesch wrote:
I found a suggestion to reformat the namenode. In order to do so, I found
it necessary to set the dir to 777. After:
$ sudo chmod 777 /var/lib/hadoop-0.20/cache/hadoop/dfs/name
$ ./hadoop namenode -format
(successful)
$ ./hadoop-daemon.sh --config $HADOOP/conf start namenode
(success!)
So..
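(A side note on the chmod: 777 gets past the permission error, but it is
usually papering over an ownership problem. A sketch of the more conventional
fix, assuming CDH3's hdfs user and hadoop group own the name directory:

$ sudo chown -R hdfs:hadoop /var/lib/hadoop-0.20/cache/hadoop/dfs/name  # restore expected ownership
$ sudo -u hdfs hadoop namenode -format                                  # format as the hdfs user, not root
$ sudo service hadoop-0.20-namenode start                               # start via the CDH init script
)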