Any comments?

2011/4/28 baran cakici <barancak...@gmail.com>

> Hi Everyone,
>
> I have a cluster with one master (JobTracker and NameNode; Intel Core2 Duo,
> 2 GB RAM) and four slaves (DataNode and TaskTracker; Celeron, 2 GB RAM). My
> input data is between 2 GB and 10 GB, and I read it in MapReduce line by
> line (a minimal sketch of what I mean is below). Now I am trying to speed
> up my system (benchmarking it), but I am not sure whether my configuration
> is correct. Could you please take a look and tell me if it is OK?
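>
> For reference, this is roughly how I read the input line by line: a minimal
> sketch using the old "mapred" API (LineMapper and the emitted key/value pair
> are just placeholders, not my real job logic):
>
> import java.io.IOException;
> import org.apache.hadoop.io.LongWritable;
> import org.apache.hadoop.io.Text;
> import org.apache.hadoop.mapred.MapReduceBase;
> import org.apache.hadoop.mapred.Mapper;
> import org.apache.hadoop.mapred.OutputCollector;
> import org.apache.hadoop.mapred.Reporter;
>
> public class LineMapper extends MapReduceBase
>     implements Mapper<LongWritable, Text, Text, LongWritable> {
>   // TextInputFormat delivers one line per map() call:
>   // key = byte offset of the line, value = the line itself.
>   public void map(LongWritable offset, Text line,
>                   OutputCollector<Text, LongWritable> out, Reporter reporter)
>       throws IOException {
>     out.collect(line, offset); // placeholder: emit the line as the key
>   }
> }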
>
> -mapred-site.xml
>
> <property>
> <name>mapred.job.tracker</name>
> <value>apple:9001</value>
> </property>
>
> <property>
> <name>mapred.child.java.opts</name>
> <value>-Xmx512m -server</value>
> </property>
>
> <property>
> <name>mapred.job.tracker.handler.count</name>
> <value>2</value>
> </property>
>
> <property>
> <name>mapred.local.dir</name>
> <value>/cygwin/usr/local/hadoop-datastore/hadoop-Baran/mapred/local</value>
> </property>
>
> <property>
> <name>mapred.map.tasks</name>
> <value>1</value>
> </property>
>
> <property>
> <name>mapred.reduce.tasks</name>
> <value>4</value>
> </property>
>
> <property>
> <name>mapred.submit.replication</name>
> <value>2</value>
> </property>
>
> <property>
> <name>mapred.system.dir</name>
> <value>/cygwin/usr/local/hadoop-datastore/hadoop-Baran/mapred/system</value>
> </property>
>
> <property>
> <name>mapred.tasktracker.indexcache.mb</name>
> <value>10</value>
> </property>
>
> <property>
> <name>mapred.tasktracker.map.tasks.maximum</name>
> <value>1</value>
> </property>
>
> <property>
> <name>mapred.tasktracker.reduce.tasks.maximum</name>
> <value>4</value>
> </property>
>
> <property>
> <name>mapred.temp.dir</name>
> <value>/cygwin/usr/local/hadoop-datastore/hadoop-Baran/mapred/temp</value>
> </property>
>
> <property>
> <name>webinterface.private.actions</name>
> <value>true</value>
> </property>
>
> <property>
> <name>mapred.reduce.slowstart.completed.maps</name>
> <value>0.01</value>
> </property>
>
> -hdfs-site.xml
>
> <property>
> <name>dfs.block.size</name>
> <value>268435456</value>
> </property>
> PS: I increased dfs.block.size because I got 50% better performance with
> this change.
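>
> For scale, my back-of-the-envelope numbers (assuming the 10 GB end of my
> input range and the default one-split-per-block behavior):
>
>   268435456 bytes / (1024 * 1024) = 256 MB per block
>   10 GB = 10240 MB, so 10240 / 256 = 40 map tasks (input splits)
>   40 tasks / (4 slaves * 1 map slot each) = 10 map waves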
>
> I look forward to your comments.
>
> Regards,
>
> Baran
>
