[ https://issues.apache.org/jira/browse/HADOOP-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Nigel Daley updated HADOOP-4168:
--------------------------------
    Component/s:     (was: build)
                     test
    Fix Version/s: 0.19.0
> TestInjectionForSimulatedStorage job is failing on linux
> ---------------------------------------------------------
>
> Key: HADOOP-4168
> URL: https://issues.apache.org/jira/browse/HADOOP-4168
> Project: Hadoop Core
> Issue Type: Bug
> Components: test
> Affects Versions: 0.18.1
> Reporter: Suman Sehgal
> Fix For: 0.19.0
>
>
> The TestInjectionForSimulatedStorage test is failing on Linux with the
> following errors:
> [junit] 2008-09-13 00:58:09,676 INFO dfs.DataNode (DataNode.java:run(2858))
> - DatanodeRegistration(127.0.0.1:xxxxx,
> storageID=DS-1383424108-66.228.166.207-0-1221267471510, infoPort=xxxxx,
> ipcPort=xxxxxx):Transmitted block blk_3685375500187228851_1001 to
> /127.0.0.1:xxxxx
> [junit] 2008-09-13 00:58:09,753 INFO FSNamesystem.audit
> (FSNamesystem.java:logAuditEvent(94)) - ugi=hadoopqa,users,search,gridlogin
> ip=/127.0.0.1 cmd=open src=/replication-test-file dst=null
> perm=null
> [junit] 2008-09-13 00:58:09,755 INFO
> dfs.TestInjectionForSimulatedStorage
> (TestInjectionForSimulatedStorage.java:waitForBlockReplication(89)) - Not
> enough replicas for 2th block blk_3685375500187228851_1001 yet. Expecting 4,
> got 1.
> [junit] 2008-09-13 00:58:10,258 INFO FSNamesystem.audit
> (FSNamesystem.java:logAuditEvent(94)) - ugi=hadoopqa,users,search,gridlogin
> ip=/127.0.0.1 cmd=open src=/replication-test-file dst=null
> perm=null
> [junit] 2008-09-13 00:58:10,259 INFO
> dfs.TestInjectionForSimulatedStorage
> (TestInjectionForSimulatedStorage.java:waitForBlockReplication(89)) - Not
> enough replicas for 2th block blk_3685375500187228851_1001 yet. Expecting 4,
> got 1.
> [junit] 2008-09-13 00:58:10,763 INFO FSNamesystem.audit
> (FSNamesystem.java:logAuditEvent(94)) - ugi=hadoopqa,users,search,gridlogin
> ip=/127.0.0.1 cmd=open src=/replication-test-file dst=null
> perm=null
> [junit] 2008-09-13 00:58:10,764 INFO
> dfs.TestInjectionForSimulatedStorage
> (TestInjectionForSimulatedStorage.java:waitForBlockReplication(89)) - Not
> enough replicas for 2th block blk_3685375500187228851_1001 yet. Expecting 4,
> got 1.
> [junit] 2008-09-13 00:58:11,267 INFO FSNamesystem.audit
> (FSNamesystem.java:logAuditEvent(94)) - ugi=hadoopqa,users,search,gridlogin
> ip=/127.0.0.1 cmd=open src=/replication-test-file dst=null
> perm=null
> [junit] 2008-09-13 00:58:11,269 INFO
> dfs.TestInjectionForSimulatedStorage
> (TestInjectionForSimulatedStorage.java:waitForBlockReplication(89)) - Not
> enough replicas for 2th block blk_3685375500187228851_1001 yet. Expecting 4,
> got 1.
> [junit] 2008-09-13 00:58:11,631 INFO dfs.StateChange
> (FSNamesystem.java:computeReplicationWork(2362)) - BLOCK* ask 127.0.0.1:xxxxx
> to replicate blk_3685375500187228851_1001 to datanode(s) 127.0.0.1:xxxxx
> 127.0.0.1:54219
> [junit] 2008-09-13 00:58:11,772 INFO FSNamesystem.audit
> (FSNamesystem.java:logAuditEvent(94)) - ugi=hadoopqa,users,search,gridlogin
> ip=/127.0.0.1 cmd=open src=/replication-test-file dst=null
> perm=null
> [junit] 2008-09-13 00:58:11,773 INFO
> dfs.TestInjectionForSimulatedStorage
> (TestInjectionForSimulatedStorage.java:waitForBlockReplication(89)) - Not
> enough replicas for 2th block blk_3685375500187228851_1001 yet. Expecting 4,
> got 1.
> [junit] 2008-09-13 00:58:12,276 INFO FSNamesystem.audit
> (FSNamesystem.java:logAuditEvent(94)) - ugi=hadoopqa,users,search,gridlogin
> ip=/127.0.0.1 cmd=open src=/replication-test-file dst=null
> perm=null
> [junit] 2008-09-13 00:58:12,278 INFO
> dfs.TestInjectionForSimulatedStorage
> (TestInjectionForSimulatedStorage.java:waitForBlockReplication(89)) - Not
> enough replicas for 2th block blk_3685375500187228851_1001 yet. Expecting 4,
> got 1.
> [junit] 2008-09-13 00:58:12,674 INFO dfs.DataNode
> (DataNode.java:transferBlocks(879)) - DatanodeRegistration(127.0.0.1:xxxxx,
> storageID=DS-1383424108-66.228.166.207-0-1221267471510, infoPort=54223,
> ipcPort=xxxxx) Starting thread to transfer block blk_3685375500187228851_1001
> to 127.0.0.1:xxxxx, 127.0.0.1:54219
> [junit] 2008-09-13 00:58:12,675 INFO dfs.DataNode
> (DataNode.java:run(2858)) - DatanodeRegistration(127.0.0.1:xxxxx,
> storageID=DS-1383424108-66.228.166.207-0-1221267471510, infoPort=54223,
> ipcPort=54224):Transmitted block blk_3685375500187228851_1001 to
> /127.0.0.1:xxxxx
> [junit] 2008-09-13 00:58:12,678 INFO dfs.DataNode
> (DataNode.java:writeBlock(1156)) - Receiving block
> blk_3685375500187228851_1001 src: /127.0.0.1:54252 dest: /127.0.0.1:xxxxx
> [junit] 2008-09-13 00:58:12,678 INFO dfs.DataNode
> (DataNode.java:writeBlock(1302)) - writeBlock blk_3685375500187228851_1001
> received exception java.io.IOException: Block blk_3685375500187228851_1001 is
> valid, and cannot be written to.
> [junit] 2008-09-13 00:58:12,678 ERROR dfs.DataNode
> (DataNode.java:run(1068)) - DatanodeRegistration(127.0.0.1:xxxxx,
> storageID=DS-55333783-66.228.166.207-0-1221267471661, infoPort=xxxxx,
> ipcPort=xxxxx):DataXceiver: java.io.IOException: Block
> blk_3685375500187228851_1001 is valid, and cannot be written to.
> [junit] at
> org.apache.hadoop.dfs.SimulatedFSDataset.writeToBlock(SimulatedFSDataset.java:365)
> [junit] at
> org.apache.hadoop.dfs.DataNode$BlockReceiver.<init>(DataNode.java:2320)
> [junit] at
> org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1187)
> [junit] at
> org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:1045)
> [junit] at java.lang.Thread.run(Thread.java:619)
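For context on the repeated "Not enough replicas ... Expecting 4, got 1" messages above: the test's waitForBlockReplication helper appears to poll the replica count for the block roughly every 500 ms (per the timestamps) and logs this line on every miss. The sketch below only illustrates that polling pattern; the ReplicaCounter interface and all names in it are hypothetical, not the actual test code.
{code:java}
import java.io.IOException;

// Illustrative sketch of the polling pattern visible in the log above;
// the ReplicaCounter interface and every name here are hypothetical.
class WaitForReplicationSketch {
    interface ReplicaCounter {
        int replicasOf(String blockName) throws IOException;
    }

    static void waitForBlockReplication(ReplicaCounter counter, String blockName,
                                        int expected, int maxAttempts)
            throws IOException, InterruptedException {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            int got = counter.replicasOf(blockName);
            if (got >= expected) {
                return; // replication target reached
            }
            System.out.println("Not enough replicas for block " + blockName
                + " yet. Expecting " + expected + ", got " + got + ".");
            Thread.sleep(500); // roughly the interval between the log entries above
        }
        throw new IOException("Timed out waiting for " + expected
            + " replicas of " + blockName);
    }
}
{code}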
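The IOException at the bottom of the trace ("Block ... is valid, and cannot be written to") is thrown from SimulatedFSDataset.writeToBlock when a write is attempted on a block the simulated dataset already considers valid, which suggests the namenode scheduled a replication to a target that already holds the block. The snippet below is a minimal sketch of that kind of guard, not the actual Hadoop source; the class and field names are hypothetical.
{code:java}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the simulated dataset; names are illustrative only.
class SimulatedDatasetSketch {
    // true = block is finalized ("valid"), false = still under construction
    private final Map<Long, Boolean> finalized = new HashMap<Long, Boolean>();

    // Mirrors the guard implied by the stack trace: a write to a block that is
    // already valid is rejected with the exception seen in the log.
    void writeToBlock(long blockId) throws IOException {
        if (Boolean.TRUE.equals(finalized.get(blockId))) {
            throw new IOException("Block blk_" + blockId
                + " is valid, and cannot be written to.");
        }
        finalized.put(blockId, Boolean.FALSE); // block is now being written
    }

    void finalizeBlock(long blockId) {
        finalized.put(blockId, Boolean.TRUE);
    }
}
{code}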
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.