Sandy:
Thanks, that helps a great deal. I am now at least getting to the point
that the jobs show up in the JobTracker. However, they all fail on
initialization with the good old:

    java.io.FileNotFoundException: File
    /tmp/hadoop-mapred/mapred/staging/hdfs/.staging/job_201302211213_0055/job.jar
    does not exist

This tells me that Maven is either not specifying that the giraph-core
jar should be used as the job jar, or that I am missing something else
in the setup.
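
As a quick sanity check (just a sketch; the path comes straight from the
exception above), I can list the staging directory to see whether the
job jar was ever actually uploaded:

    hadoop fs -ls /tmp/hadoop-mapred/mapred/staging/hdfs/.staging/job_201302211213_0055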

Attached is the job.xml from one of the failed jobs, and below is the
relevant profile from my pom.xml. I did upgrade to CDH4.1.3 just to see
if that would help. Also, I have been running all sorts of jobs
(benchmarks and other tests) against this cluster for some time, so I
know that the cluster works well.

Again, any help is appreciated.

Relevant section of pom.xml:
    <profile>
      <id>hadoop_cdh4.1.3mr1</id>
      <properties>
        <hadoopmr1.version>2.0.0-mr1-cdh4.1.3</hadoopmr1.version>
        <hadoop.version>2.0.0-cdh4.1.3</hadoop.version>
        <munge.symbols>HADOOP_1_SECURITY,HADOOP_1_SECRET_MANAGER</munge.symbols>
      </properties>
      <dependencies>
        <!-- sorted lexicographically -->
        <dependency>
          <groupId>commons-net</groupId>
          <artifactId>commons-net</artifactId>
        </dependency>
        <dependency>
          <groupId>org.apache.hadoop</groupId>
          <artifactId>hadoop-client</artifactId>
          <version>${hadoopmr1.version}</version>
          <scope>provided</scope>
        </dependency>
        <dependency>
          <groupId>org.apache.hadoop</groupId>
          <artifactId>hadoop-common</artifactId>
          <version>${hadoop.version}</version>
          <scope>provided</scope>
        </dependency>
        <dependency>
          <groupId>org.apache.hadoop</groupId>
          <artifactId>hadoop-hdfs</artifactId>
          <version>${hadoop.version}</version>
          <scope>provided</scope>
        </dependency>
        <dependency>
          <groupId>org.apache.hadoop</groupId>
          <artifactId>hadoop-test</artifactId>
          <version>${hadoopmr1.version}</version>
          <scope>provided</scope>
        </dependency>
      </dependencies>
    </profile>
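
For completeness, this is how I activate that profile when building
(standard Maven profile selection; nothing here beyond what is already
in the pom above):

    mvn -Phadoop_cdh4.1.3mr1 clean package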



On 2/25/2013 12:47 PM, Sandy Ryza wrote:
Hi David,

Moving this to cdh-user, as it is CDH-specific.

CDH4 comes with two versions of MapReduce: MR1 and MR2. It sounds like you are building against MR2 (http://blog.cloudera.com/blog/2012/10/mr2-and-yarn-briefly-explained/). Do you know whether your cluster runs MR2/YARN or MR1? If it runs MR2, you can set mapreduce.framework.name to "yarn". If it runs MR1, you can build against the MR1 jar by setting the version of your hadoop-client dependency to 2.0.0-mr1-cdh4.1.1 (https://ccp.cloudera.com/display/CDH4DOC/Managing+Hadoop+API+Dependencies+in+CDH4).
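
For illustration, a minimal sketch of that MR1 dependency in a pom
(version string per the Cloudera doc above; adjust it to your CDH
release):

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.0.0-mr1-cdh4.1.1</version>
    </dependency>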
Does that help?

-Sandy


On Mon, Feb 25, 2013 at 8:26 AM, David Boyd <db...@data-tactics-corp.com> wrote:

    All:
       I am trying to get the Giraph 0.2 snapshot (pulled via Git on
    Friday) to build and run with CDH4.

    I modified the pom.xml to provide a profile for my specific
    version (4.1.1).
    The build works (mvn -Phadoop_cdh4.1.1 clean package test) and passes
    all the tests.

    If I try to do the next step and submit to my cluster with the
    command:

        mvn -Phadoop_cdh4.1.1 test -Dprop.mapred.job.tracker=10.1.94.53:8021 -Dgiraph.zkList=10.1.94.104:2181

    the JSON test in core fails.  If I move that test out of the way,
    a whole bunch of tests in examples fail.  They all fail with:

        java.io.IOException: Cannot initialize Cluster. Please check your
        configuration for mapreduce.framework.name and the correspond
        server addresses.


    I have tried passing mapreduce.framework.name as both local and
    classic.  I have also set those values in my mapred-site.xml.
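
    For reference, a sketch of the entry as I set it in mapred-site.xml
    ("classic" selects the JobTracker-based MR1 runtime and "yarn"
    selects MR2):

        <property>
          <name>mapreduce.framework.name</name>
          <value>classic</value>
        </property>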

    Interestingly, I can run the PageRank benchmark in core with the
    command:

        hadoop jar ./giraph-core/target/giraph-0.2-SNAPSHOT-for-hadoop-2.0.0-cdh4.1.3-jar-with-dependencies.jar \
            org.apache.giraph.benchmark.PageRankBenchmark \
            -Dmapred.child.java.opts="-Xmx64g -Xms64g -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit" \
            -Dgiraph.zkList=10.1.94.104:2181 \
            -e 1 -s 3 -v -V 50000 -w 83

    And it completes just fine.

    I have searched high and low for documents and examples on how to
    run the example programs other than from Maven, but have not found
    anything.

    Any help or suggestions would be greatly appreciated.

    Thanks.




--
========= mailto:db...@data-tactics.com ============
David W. Boyd
Director, Engineering, Research and Development
Data Tactics Corporation
7901 Jones Branch, Suite 240
Mclean, VA 22102
office:   +1-703-506-3735, ext 308
fax:     +1-703-506-6703
cell:     +1-703-402-7908
============== http://www.data-tactics.com/ ============
The information contained in this message may be privileged
and/or confidential and protected from disclosure.
If the reader of this message is not the intended recipient
or an employee or agent responsible for delivering this message
to the intended recipient, you are hereby notified that any
dissemination, distribution or copying of this communication
is strictly prohibited.  If you have received this communication
in error, please notify the sender immediately by replying to
this message and deleting the material from any computer.





  
[Attached: job.xml, rendered from the JobTracker configuration page for
job_201302211213_0056. The entries relevant to this thread are
reproduced below; the remaining several hundred rows are largely stock
Hadoop/CDH4 defaults.]

    mapred.job.name                        testContinue
    mapred.jar                             /tmp/hadoop-mapred/mapred/staging/hdfs/.staging/job_201302211213_0056/job.jar
    mapreduce.job.dir                      hdfs://xd-gp-nn:8020/tmp/hadoop-mapred/mapred/staging/hdfs/.staging/job_201302211213_0056
    mapreduce.jobtracker.staging.root.dir  ${hadoop.tmp.dir}/mapred/staging
    hadoop.tmp.dir                         /tmp/hadoop-${user.name}
    mapred.job.tracker                     10.1.94.53:8021
    mapred.working.dir                     file:/data/hdfs/Giraph/giraph/giraph-core
    fs.defaultFS                           file:///
    user.name                              hdfs
    mapred.mapper.new-api                  true
    mapreduce.map.class                    org.apache.giraph.graph.GraphMapper
    mapreduce.inputformat.class            org.apache.giraph.bsp.BspInputFormat
    mapreduce.outputformat.class           org.apache.giraph.bsp.BspOutputFormat
    mapred.map.tasks                       4
    mapred.map.max.attempts                0
    mapred.reduce.tasks                    0
    mapreduce.user.classpath.first         true
    giraph.vertexClass                     org.apache.giraph.benchmark.EdgeListVertexPageRankBenchmark
    giraph.vertexInputFormatClass          org.apache.giraph.io.formats.PseudoRandomVertexInputFormat
    giraph.vertexOutputFormatClass         org.apache.giraph.io.formats.JsonBase64VertexOutputFormat
    giraph.minWorkers                      3
    giraph.maxWorkers                      3
    giraph.zkManagerDirectory              /tmp/_giraphTests/_defaultZkManagerDir
    giraph.zkDir                           /tmp/_giraphTests/_bspZooKeeper
    giraph.checkpointDirectory             /tmp/_giraphTests/_checkpoints
    mapred.output.dir                      /tmp/_giraphTests/testContinue
    mapreduce.job.submithost               xd-analysis.xdata.data-tactics-corp.com
    mapreduce.job.submithostaddress        10.1.90.38
