Thanks for your advice; I didn't pay much attention to this. It is so
important, I think I should be careful about it.
2010/11/25 Harsh J
> Hello,
>
> 2010/11/25 祝美祺 :
> >
> > <property>
> >   <name>dfs.name.dir</name>
> >   <value>/home/hadoop/tmp</value>
> > </property>
> >
>
> (Nitpicking)
>
> Why set it to a directory named "tmp"? The dfs.name.dir is where the
> NameNode stores all its metadata related to the data across the
> DataNodes. I'd give it a better name than tmp, which gives a sense that
> it isn't important.
Hello,
2010/11/25 祝美祺 :
>
> <property>
>   <name>dfs.name.dir</name>
>   <value>/home/hadoop/tmp</value>
> </property>
>
(Nitpicking)
Why set it to a directory named "tmp"? The dfs.name.dir is where the
NameNode stores all its metadata related to the data across the
DataNodes. I'd give it a better name than tmp, which gives a sense that
it isn't important.
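A clearer choice could be something along these lines (the path below is just an illustration, not a requirement):

<property>
  <name>dfs.name.dir</name>
  <value>/home/hadoop/dfs/name</value>
</property>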
Silly mistakes... Thank you very much for your help.
On 25 November 2010 at 14:00, rahul patodi wrote:
> please correct your hdfs-site.xml contents:
>
> <property>
>   <name>dfs.name.dir</name>
>   <value>/home/hadoop/tmp</value>
> </property>
>
> <property>
>   <name>dfs.data.dir</name>
>   <value>/home/hadoop/data</value>
> </property>
>
> <property>
>   <name>dfs.replication</name>
>   <value>1</value>
> </property>
>
> Everything else is fine.
> One more thing:
please correct your hdfs-site.xml contents:

<property>
  <name>dfs.name.dir</name>
  <value>/home/hadoop/tmp</value>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>/home/hadoop/data</value>
</property>

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

Everything else is fine.
One more thing: it is best practice to edit and add the details of your
master and slaves (which you can get from my blog) in:
/etc/
Thanks for your reply, here is my configuration.

master:

core-site.xml:
<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.0.142:9000</value>
</property>

hdfs-site.xml:
<property>
  <name>dfs.name.dir</name>
  <value>/home/hadoop/tmp</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/home/hadoop/data</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

mapred-site.xml:
<property>
  <name>mapred.job.tracker</name>
  <value>192.168.0.142:9200</value>
  <description>jobtracker host</description>
</property>
Also check your other files, like /etc/hosts.
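For example, the /etc/hosts entries on every node could look something like this (the hostnames and IPs below are placeholders, use your own):

192.168.0.142   hadoop-master
192.168.0.143   hadoop-slave1
192.168.0.144   hadoop-slave2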
On Thu, Nov 25, 2010 at 10:36 AM, rahul patodi wrote:
> I think you should check your configuration files in the conf folder and add
> the required entries in
> core-site.xml, mapred-site.xml and hdfs-site.xml.
> For pseudo-distributed mode you can refer to:
>
I think you should check your configuration files in the conf folder and add
the required entries in
core-site.xml, mapred-site.xml and hdfs-site.xml.
For pseudo-distributed mode you can refer to:
http://hadoop-tutorial.blogspot.com/2010/11/running-hadoop-in-pseudo-distributed.html
For distributed mode you can refer to:
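In case it helps, the usual minimal entries for pseudo-distributed mode look roughly like this (the localhost host/port values are the common defaults, adjust to your setup):

core-site.xml:
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

hdfs-site.xml:
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

mapred-site.xml:
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>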
Hi Henning,
Thanks again.
Let me explain my scenario first so you can make better sense of my question.
I have a web application running on a GlassFish server. Every 24 hours a
Quartz job runs on the server, and I need to call a set of Hadoop jobs one
after the other, read the final output, and store
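For what it's worth, a minimal sketch of driving two jobs back to back with the 0.20 mapreduce API could look like this (the paths and job names are placeholders, and you would set your own mapper/reducer classes):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class NightlyPipeline {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // First job; set your own mapper/reducer classes here.
    Job first = new Job(conf, "first-job");
    FileInputFormat.addInputPath(first, new Path("/data/in"));
    FileOutputFormat.setOutputPath(first, new Path("/data/intermediate"));
    // waitForCompletion(true) blocks until the job finishes,
    // so the next job only starts after this one succeeds.
    if (!first.waitForCompletion(true)) {
      return; // stop the chain if the first job fails
    }

    // Second job consumes the first job's output.
    Job second = new Job(conf, "second-job");
    FileInputFormat.addInputPath(second, new Path("/data/intermediate"));
    FileOutputFormat.setOutputPath(second, new Path("/data/out"));
    second.waitForCompletion(true);
    // Read the final output under /data/out, e.g. via FileSystem.open().
  }
}

A Quartz job could simply invoke such a driver once a day and pick up the results from the final output path.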
Hi Praveen,
looking at the job configuration you will find properties like user.name
and more stuff that has been created by substituting template values in
core-default.xml and mapred-default.xml (all in the Hadoop jars). I suppose
one of these (if not user.name) defines the user that submits. But I
haven't
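A quick way to see the substitution at work (a small sketch, assuming the Hadoop jars and config files are on the classpath):

import org.apache.hadoop.conf.Configuration;

public class ConfProbe {
  public static void main(String[] args) {
    // Loads core-default.xml and core-site.xml from the classpath.
    Configuration conf = new Configuration();
    // ${user.name} in core-default.xml is expanded from the JVM system
    // property of the same name, e.g. hadoop.tmp.dir = /tmp/hadoop-<user>
    System.out.println("hadoop.tmp.dir = " + conf.get("hadoop.tmp.dir"));
    // A plain client-side Configuration may not carry user.name itself;
    // the configuration of a submitted job (its job.xml) does.
    System.out.println("user.name = " + conf.get("user.name"));
  }
}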