[ https://issues.apache.org/jira/browse/HBASE-9338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13750383#comment-13750383 ]
Elliott Clark commented on HBASE-9338:
--------------------------------------

Configs on the cluster:

core-site.xml
{code}
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://${MASTER_HOSTNAME}:8020/</value>
  </property>
  <property>
    <name>hadoop.data.dir0</name>
    <value>/data0/</value>
  </property>
  <property>
    <name>hadoop.data.dir1</name>
    <value>/data1/</value>
  </property>
  <property>
    <name>hadoop.data.dir2</name>
    <value>/data2/</value>
  </property>
  <property>
    <name>hadoop.data.dir3</name>
    <value>/data3/</value>
  </property>
  <property>
    <name>hadoop.data.dir4</name>
    <value>/data4/</value>
  </property>
  <property>
    <name>hadoop.data.dir5</name>
    <value>/data5/</value>
  </property>
  <property>
    <name>mapred.temp.dir</name>
    <value>${hadoop.data.dir1}/mapred/temp</value>
    <description>A shared directory for temporary files.</description>
  </property>
  <property>
    <name>ipc.client.connect.timeout</name>
    <value>1000</value>
  </property>
  <property>
    <name>ipc.client.connect.max.retries.on.timeouts</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.socket.timeout</name>
    <value>5000</value>
  </property>
  <property>
    <name>dfs.socket.write.timeout</name>
    <value>5000</value>
  </property>
  <property>
    <name>ipc.ping.interval</name>
    <value>20000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>65536</value>
  </property>
  <property>
    <name>ipc.client.connect.max.retries</name>
    <value>10</value>
  </property>
  <property>
    <name>ipc.client.tcpnodelay</name>
    <value>true</value>
  </property>
  <property>
    <name>ipc.server.tcpnodelay</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.domain.socket.path</name>
    <value>/var/lib/hadoop/dn_socket._PORT</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.client.file-block-storage-locations.timeout</name>
    <value>3000</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit.skip.checksum</name>
    <value>true</value>
  </property>
</configuration>
{code}

hdfs-site.xml
{code}
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>dfs.datanode.handler.count</name> <!-- default 10 -->
    <value>32</value>
    <description>The number of server threads for the datanode.</description>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name> <!-- default 10 -->
    <value>32</value>
    <description>The number of server threads for the namenode.</description>
  </property>
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value>
    <description>The default block size for new files.</description>
  </property>
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4098</value>
  </property>
  <property>
    <name>dfs.namenode.replication.interval</name>
    <value>15</value>
  </property>
  <property>
    <name>dfs.balance.bandwidthPerSec</name>
    <value>10485760</value>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>${hadoop.data.dir1}/dfs/namesecondary</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>${hadoop.data.dir0}/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>${hadoop.data.dir0}/dfs/data,${hadoop.data.dir1}/dfs/data,${hadoop.data.dir2}/dfs/data,${hadoop.data.dir3}/dfs/data,${hadoop.data.dir4}/dfs/data,${hadoop.data.dir5}/dfs/data</value>
  </property>
  <property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>10000</value>
  </property>
  <property>
    <name>ipc.client.connect.timeout</name>
    <value>1000</value>
  </property>
  <property>
    <name>ipc.client.connect.max.retries.on.timeouts</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.socket.timeout</name>
    <value>5000</value>
  </property>
  <property>
    <name>dfs.socket.write.timeout</name>
    <value>5000</value>
  </property>
  <property>
    <name>dfs.domain.socket.path</name>
    <value>/var/lib/hadoop/dn_socket._PORT</value>
  </property>
  <property>
    <name>dfs.block.local-path-access.user</name>
    <value>hbase</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit.skip.checksum</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.client.file-block-storage-locations.timeout</name>
    <value>3000</value>
  </property>
</configuration>
{code}

yarn-site.xml:
{code}
<?xml version="1.0"?>
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>${MASTER_HOSTNAME}:10040</value>
    <description>On the server this is the port the Resource Manager runs on; on the client it is used for connecting to the Resource Manager.</description>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>${MASTER_HOSTNAME}:8025</value>
    <description>Used by the Node Manager for communication with the Resource Manager.</description>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>${MASTER_HOSTNAME}:8030</value>
    <description>Used by Application Masters to communicate with the Resource Manager; in our case for the MRAppMaster (MapReduce Application Master) to communicate with the Resource Manager.</description>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>${MASTER_HOSTNAME}:8141</value>
    <description>Used by administrative clients ($yarn rmadmin) to communicate with the Resource Manager.</description>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>${hadoop.data.dir0}/mapred/nodemanager,${hadoop.data.dir1}/mapred/nodemanager,${hadoop.data.dir2}/mapred/nodemanager,${hadoop.data.dir3}/mapred/nodemanager,${hadoop.data.dir4}/mapred/nodemanager,${hadoop.data.dir5}/mapred/nodemanager</value>
    <final>true</final>
    <description>Comma-separated list of directories where local data is persisted by the Node Manager.</description>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
    <description>Long-running service which executes on the Node Manager(s) and provides MapReduce sort and shuffle functionality.</description>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
    <description>Enable log aggregation so application logs are moved onto HDFS and are viewable via the web UI after the application has completed.
      The default location on HDFS is '/log' and can be changed via the yarn.nodemanager.remote-app-log-dir property.</description>
  </property>
  <property>
    <name>hadoop.security.authorization</name>
    <value>false</value>
    <description>Disable authorization for development and for clusters that do not require security.</description>
  </property>
  <property>
    <description>Amount of physical memory, in MB, that can be allocated for containers.</description>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>9000</value>
  </property>
</configuration>
{code}

hbase-site.xml:
{code}
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>${MASTER_HOSTNAME}</value>
    <description>The directory shared by RegionServers.</description>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://${MASTER_HOSTNAME}:8020/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.block.local-path-access.user</name>
    <value>hbase</value>
  </property>
  <property>
    <name>dfs.domain.socket.path</name>
    <value>/var/lib/hadoop/dn_socket._PORT</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit.skip.checksum</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.IntegrationTestDataIngestSlowDeterministic.runtime</name>
    <value>3600000</value>
  </property>
</configuration>
{code}

> Test Big Linked List fails on Hadoop 2.1.0
> ------------------------------------------
>
> Key: HBASE-9338
> URL: https://issues.apache.org/jira/browse/HBASE-9338
> Project: HBase
> Issue Type: Bug
> Components: test
> Affects Versions: 0.96.0
> Reporter: Elliott Clark
> Assignee: Elliott Clark
> Priority: Critical
> Fix For: 0.96.0
>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
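For anyone reproducing this setup, a minimal sketch of how the values above can be sanity-checked from a client JVM. It assumes the *-site.xml files are on the client classpath and that ${MASTER_HOSTNAME} has already been substituted by whatever deployment tooling installed the configs; Hadoop's Configuration only expands ${...} references that resolve to other properties or system properties (such as ${hadoop.data.dir1} from core-site.xml). The class name is illustrative, not part of the cluster setup.

{code}
// Hypothetical helper: print a few of the config values a client JVM actually resolves.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class DumpClusterConf {
  public static void main(String[] args) {
    // HBaseConfiguration.create() loads hbase-default.xml/hbase-site.xml on top of the
    // Hadoop core resources; hdfs-site.xml is added explicitly so the HDFS overrides above
    // are visible here too.
    Configuration conf = HBaseConfiguration.create();
    conf.addResource("hdfs-site.xml");

    // Configuration.get() expands ${...} references, so fs.checkpoint.dir resolves the
    // ${hadoop.data.dir1} value from core-site.xml. ${MASTER_HOSTNAME} prints literally
    // unless it was substituted before the files were deployed.
    for (String key : new String[] {
        "hbase.rootdir",
        "dfs.client.read.shortcircuit",
        "dfs.domain.socket.path",
        "fs.checkpoint.dir",
        "dfs.data.dir" }) {
      System.out.println(key + " = " + conf.get(key));
    }
  }
}
{code}

If dfs.client.read.shortcircuit or dfs.domain.socket.path do not come back with the values shown in the dumps above, the client is not reading the same configs the cluster was started with.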