I was trying a crawl with 200 seeds. In previous runs it created the index without any problem, but now when I start the crawl it shows the following exception at depth 2:
attempt_201003301923_0007_m_000000_0: Aborting with 100 hung threads.
Task attempt_201003301923_0007_m_000004_0 failed to report status for 1865 seconds. Killing!
Task attempt_201003301923_0007_r_000000_0 failed to report status for 1243 seconds. Killing!

org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to create file /user/nutch/crawled/segments/20100330193414/crawl_fetch/part-00002/index for DFSClient_attempt_201003301923_0007_r_000002_1 on client 192.168.101.155 because current leaseholder is trying to recreate file.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1055)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:998)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:301)
    at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:481)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:894)

    at org.apache.hadoop.ipc.Client.call(Client.java:697)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
    at $Proxy1.create(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy1.create(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:2585)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:454)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:190)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:487)
    at org.apache.hadoop.io.SequenceFile$BlockCompressWriter.<init>(SequenceFile.java:1198)
    at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:401)
    at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:306)
    at org.apache.hadoop.io.MapFile$Writer.<init>(MapFile.java:160)
    at org.apache.hadoop.io.MapFile$Writer.<init>(MapFile.java:134)
    at org.apache.hadoop.io.MapFile$Writer.<init>(MapFile.java:92)
    at org.apache.nutch.fetcher.FetcherOutputFormat.getRecordWriter(FetcherOutputFormat.java:66)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:404)
    at org.apache.hadoop.mapred.Child.main(Child.java:158)

org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to create file /user/nutch/crawled/segments/20100330193414/crawl_fetch/part-00002/index for DFSClient_attempt_201003301923_0007_r_000002_2 on client 192.168.101.155 because current leaseholder is trying to recreate file.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1055)
    ... (remaining frames identical to the previous exception, ending in Child.main(Child.java:158))

Exception in thread "main" java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1232)
    at org.apache.nutch.fetcher.Fetcher.fetch(Fetcher.java:969)
    at org.apache.nutch.fetcher.Fetcher.main(Fetcher.java:1003)

runbot: fetch 20100330193414 at depth 2 failed.
runbot: Deleting segment 20100330193414.

Can anyone help me out? Thanks in advance.

--
View this message in context: http://n3.nabble.com/Problem-with-writing-index-tp687686p687686.html
Sent from the Nutch - User mailing list archive at Nabble.com.
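For context: the "failed to report status for N seconds. Killing!" messages come from the MapReduce task timeout, and an AlreadyBeingCreatedException with "current leaseholder is trying to recreate file" typically appears when a retried or speculative task attempt tries to open an output file whose HDFS lease is still held by an earlier, killed attempt. A sketch of the hadoop-site.xml properties commonly tuned in this situation is below; the values are illustrative assumptions, not settings confirmed for this cluster:

```xml
<!-- hadoop-site.xml sketch: illustrative values only, adjust for your cluster -->

<!-- Give slow fetch tasks more time before the tasktracker kills them.
     Default is 600000 ms (10 minutes); 0 disables the timeout entirely. -->
<property>
  <name>mapred.task.timeout</name>
  <value>1800000</value>
</property>

<!-- Disable speculative execution so a second attempt never races the
     first for the same output file lease. -->
<property>
  <name>mapred.map.tasks.speculative.execution</name>
  <value>false</value>
</property>
<property>
  <name>mapred.reduce.tasks.speculative.execution</name>
  <value>false</value>
</property>
```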