Have you disabled the statechange log on the NN? This block has to be in
there.
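
For reference, something like this is a quick check (the logger name is the stock one from 0.20, org.apache.hadoop.hdfs.StateChange; the conf and log paths are guesses for your install, so adjust them):

```shell
# Conf/log locations are assumptions -- point these at your install.
CONF_DIR=${CONF_DIR:-/etc/hadoop/conf}
NN_LOG=${NN_LOG:-/var/log/hadoop/hadoop-namenode.log}

# A WARN/ERROR level on the org.apache.hadoop.hdfs.StateChange logger
# suppresses the per-block BLOCK* messages on the NameNode.
grep -i StateChange "$CONF_DIR/log4j.properties"

# If it is left at INFO, the allocation for this block should show up:
grep 'blk_926027507678171558' "$NN_LOG"
```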

Also, are you by any chance running with append enabled on unpatched 0.20?
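
That is easy to confirm from the config (the key name is the one stock 0.20 uses; the conf path is an assumption):

```shell
# In stock 0.20, dfs.support.append defaults to false; append is on only
# if it is explicitly set to true in hdfs-site.xml.
# Conf path is an assumption -- adjust for your install.
grep -A1 'dfs.support.append' /etc/hadoop/conf/hdfs-site.xml
```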

-Todd

On Mon, May 17, 2010 at 12:40 PM, Ted Yu <yuzhih...@gmail.com> wrote:

> That blk doesn't appear in the NameNode log.
>
> For the datanode:
> 2010-05-15 00:09:31,023 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_926027507678171558_3620 src: /10.32.56.170:49172 dest: /10.32.56.171:50010
> 2010-05-15 00:09:31,024 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_926027507678171558_3620 received exception java.io.IOException: Unexpected problem in creating temporary file for blk_926027507678171558_3620.  File /home/hadoop/m2m_3.0.x/3.0.trunk.39-270238/data/hadoop-data/dfs/data/tmp/blk_926027507678171558 should not be present, but is.
> 2010-05-15 00:09:31,024 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_-5814095875968936685_2910 received exception java.io.IOException: Unexpected problem in creating temporary file for blk_-5814095875968936685_2910.  File /home/hadoop/m2m_3.0.x/3.0.trunk.39-270238/data/hadoop-data/dfs/data/tmp/blk_-5814095875968936685 should not be present, but is.
> 2010-05-15 00:09:31,025 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.32.56.171:50010, storageID=DS-1723593983-10.32.56.171-50010-1273792791835, infoPort=50075, ipcPort=50020):DataXceiver
> java.io.IOException: Unexpected problem in creating temporary file for blk_926027507678171558_3620.  File /home/hadoop/m2m_3.0.x/3.0.trunk.39-270238/data/hadoop-data/dfs/data/tmp/blk_926027507678171558 should not be present, but is.
>        at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolume.createTmpFile(FSDataset.java:398)
>        at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolume.createTmpFile(FSDataset.java:376)
>        at org.apache.hadoop.hdfs.server.datanode.FSDataset.createTmpFile(FSDataset.java:1133)
>        at org.apache.hadoop.hdfs.server.datanode.FSDataset.writeToBlock(FSDataset.java:1022)
>        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:98)
>        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:259)
>        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
>        at java.lang.Thread.run(Thread.java:619)
> 2010-05-15 00:09:31,025 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.32.56.171:50010, storageID=DS-1723593983-10.32.56.171-50010-1273792791835, infoPort=50075, ipcPort=50020):DataXceiver
>
> 2010-05-15 00:19:28,334 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_926027507678171558_3620 src: /10.32.56.170:36887 dest: /10.32.56.171:50010
> 2010-05-15 00:19:28,334 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_926027507678171558_3620 received exception java.io.IOException: Unexpected problem in creating temporary file for blk_926027507678171558_3620.  File /home/hadoop/m2m_3.0.x/3.0.trunk.39-270238/data/hadoop-data/dfs/data/tmp/blk_926027507678171558 should not be present, but is.
> 2010-05-15 00:19:28,334 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.32.56.171:50010, storageID=DS-1723593983-10.32.56.171-50010-1273792791835, infoPort=50075, ipcPort=50020):DataXceiver
> java.io.IOException: Unexpected problem in creating temporary file for blk_926027507678171558_3620.  File /home/hadoop/m2m_3.0.x/3.0.trunk.39-270238/data/hadoop-data/dfs/data/tmp/blk_926027507678171558 should not be present, but is.
>        at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolume.createTmpFile(FSDataset.java:398)
>        at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolume.createTmpFile(FSDataset.java:376)
>        at org.apache.hadoop.hdfs.server.datanode.FSDataset.createTmpFile(FSDataset.java:1133)
>        at org.apache.hadoop.hdfs.server.datanode.FSDataset.writeToBlock(FSDataset.java:1022)
>        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:98)
>        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:259)
>        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
>        at java.lang.Thread.run(Thread.java:619)
> 2010-05-15 00:29:25,635 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_926027507678171558_3620 src: /10.32.56.170:34823 dest: /10.32.56.171:50010
>
> On Mon, May 17, 2010 at 11:43 AM, Todd Lipcon <t...@cloudera.com> wrote:
>
> > Hi Ted,
> >
> > Can you please grep your NN and DN logs for blk_926027507678171558 and
> > pastebin the results?
> >
> > -Todd
> >
> > On Mon, May 17, 2010 at 9:57 AM, Ted Yu <yuzhih...@gmail.com> wrote:
> >
> > > Hi,
> > > We use CDH2 hadoop-0.20.2+228 which crashed on datanode smsrv10.ciq.com
> > >
> > > I found this in the datanode log:
> > >
> > > 2010-05-15 07:37:35,955 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_926027507678171558_3620 src: /10.32.56.170:53378 dest: /10.32.56.171:50010
> > > 2010-05-15 07:37:35,956 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_926027507678171558_3620 received exception java.io.IOException: Unexpected problem in creating temporary file for blk_926027507678171558_3620.  File /home/hadoop/m2m_3.0.x/3.0.trunk.39-270238/data/hadoop-data/dfs/data/tmp/blk_926027507678171558 should not be present, but is.
> > > 2010-05-15 07:37:35,956 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.32.56.171:50010, storageID=DS-1723593983-10.32.56.171-50010-1273792791835, infoPort=50075, ipcPort=50020):DataXceiver
> > > java.io.IOException: Unexpected problem in creating temporary file for blk_926027507678171558_3620.  File /home/hadoop/m2m_3.0.x/3.0.trunk.39-270238/data/hadoop-data/dfs/data/tmp/blk_926027507678171558 should not be present, but is.
> > >        at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolume.createTmpFile(FSDataset.java:398)
> > >        at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolume.createTmpFile(FSDataset.java:376)
> > >        at org.apache.hadoop.hdfs.server.datanode.FSDataset.createTmpFile(FSDataset.java:1133)
> > >        at org.apache.hadoop.hdfs.server.datanode.FSDataset.writeToBlock(FSDataset.java:1022)
> > >        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:98)
> > >        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:259)
> > >        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
> > >        at java.lang.Thread.run(Thread.java:619)
> > >
> > > Can someone provide a clue?
> > >
> > > Thanks
> > >
> >
> >
> >
> > --
> > Todd Lipcon
> > Software Engineer, Cloudera
> >
>



-- 
Todd Lipcon
Software Engineer, Cloudera
