Re: LeaseExpiredException: Lease mismatch in Hadoop MapReduce | How to solve?

2013-11-12 Thread chandu banavaram
Please send me the answer to this query.


On Tue, Nov 12, 2013 at 2:52 AM, unmesha sreeveni wrote:

> While running a job with a 90 MB file, I am getting a LeaseExpiredException:
>
> 13/11/12 15:46:41 WARN mapred.JobClient: Use GenericOptionsParser for
> parsing the arguments. Applications should implement Tool for the same.
> 13/11/12 15:46:42 INFO input.FileInputFormat: Total input paths to process
> : 1
> 13/11/12 15:46:43 INFO mapred.JobClient: Running job:
> job_201310301645_25033
> 13/11/12 15:46:44 INFO mapred.JobClient:  map 0% reduce 0%
> 13/11/12 15:46:56 INFO mapred.JobClient: Task Id :
> attempt_201310301645_25033_m_00_0, Status : FAILED
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
> Lease mismatch on /user/hdfs/in/map owned by
> DFSClient_NONMAPREDUCE_-1622335545_1 but is accessed by
> DFSClient_NONMAPREDUCE_-1561990512_1
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2459)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2437)
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2503)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2480)
>  at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:556)
> at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:337)
>  at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44958)
>  at org.
> attempt_201310301645_25033_m_00_0: SLF4J: Class path contains multiple
> SLF4J bindings.
> attempt_201310301645_25033_m_00_0: SLF4J: Found binding in
> [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> attempt_201310301645_25033_m_00_0: SLF4J: Found binding in
> [jar:file:/tmp/hadoop-mapred/mapred/local/taskTracker/hdfs/jobcache/job_201310301645_25033/jars/job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> attempt_201310301645_25033_m_00_0: SLF4J: See
> http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> 13/11/12 15:47:02 INFO mapred.JobClient: Task Id :
> attempt_201310301645_25033_m_00_1, Status : FAILED
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
> Lease mismatch on /user/hdfs/in/map owned by
> DFSClient_NONMAPREDUCE_-1622335545_1 but is accessed by
> DFSClient_NONMAPREDUCE_-1662926329_1
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2459)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2262)
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2175)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
>  at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
>  at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
>  at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java
> attempt_201310301645_25033_m_00_1: SLF4J: Class path contains multiple
> SLF4J bindings.
> attempt_201310301645_25033_m_00_1: SLF4J: Found binding in
> [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> attempt_201310301645_25033_m_00_1: SLF4J: Found binding in
> [jar:file:/tmp/hadoop-mapred/mapred/local/taskTracker/hdfs/jobcache/job_201310301645_25033/jars/job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> attempt_201310301645_25033_m_00_1: SLF4J: See
> http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> attempt_201310301645_25033_m_00_1: log4j:WARN No appenders could be
> found for logger (org.apache.hadoop.hdfs.DFSClient).
> attempt_201310301645_25033_m_00_1: log4j:WARN Please initialize the
> log4j system properly.
> attempt_201310301645_25033_m_00_1: log4j:WARN See
> http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
> 13/11/12 15:47:10 INFO mapred.JobClient: Task Id :
> attempt_201310301645_25033_m_01_0, Status : FAILED
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException):
> No lease on /user/hdfs/in/map: File is not open for writing. Holder
> DFSClient_NONMAPREDUCE_-1622335545_1 does not have any open files.
>  at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2452)
> at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2437)
>  at
> org.apache.had

Re: LeaseExpiredException: Lease mismatch in Hadoop MapReduce | How to solve?

2013-11-14 Thread chandu banavaram
Thanks.


On Thu, Nov 14, 2013 at 10:18 PM, unmesha sreeveni wrote:

> @chandu banavaram:
> This exception usually happens if HDFS is trying to write into a file
> that no longer exists in HDFS.
>
> I think in my case certain files were not created in HDFS; the creation
> failed due to some permission issue.
>
> I am trying it out.
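>
> In case it helps with debugging: below is a minimal sketch of the most
> common cause I know of, two task attempts (for example a speculative
> duplicate of the same map) creating the same HDFS side file, so the
> second DFSClient takes over the lease and the first one fails exactly
> as in the trace above. The class name, the side-file path, and the
> per-attempt naming are made up for illustration; this is not taken
> from the failing job.
>
> import java.io.IOException;
>
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.io.LongWritable;
> import org.apache.hadoop.io.Text;
> import org.apache.hadoop.mapreduce.Mapper;
>
> // Hypothetical mapper showing the failure mode and one way out.
> public class LeaseDemoMapper
>     extends Mapper<LongWritable, Text, Text, Text> {
>
>   private FSDataOutputStream out;
>
>   @Override
>   protected void setup(Context context) throws IOException {
>     FileSystem fs = FileSystem.get(context.getConfiguration());
>
>     // BROKEN: a fixed path shared by every attempt. Two concurrent
>     // attempts mean two DFSClients fighting over one lease:
>     // Path side = new Path("/user/hdfs/in/map");
>
>     // SAFER: a path that is unique per task attempt, so no two
>     // DFSClient instances ever write to the same file.
>     Path side = new Path("/user/hdfs/in/map-"
>         + context.getTaskAttemptID().toString());
>     out = fs.create(side, true);
>   }
>
>   @Override
>   protected void cleanup(Context context) throws IOException {
>     out.close();
>   }
> }
>
> If the job writes side files like this, making the path unique per
> attempt (or turning off speculative execution, which on this version
> should be the mapred.map.tasks.speculative.execution property) should
> avoid the lease mismatch.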
>
>
> On Wed, Nov 13, 2013 at 9:25 AM, unmesha sreeveni 
> wrote:
>
>> :) OK.
>> Why? Did you also experience the same?
>>
>>
>> On Tue, Nov 12, 2013 at 5:14 PM, chandu banavaram <
>> chandu.banava...@gmail.com> wrote:
>>
>>> Please send me the answer to this query.
>>>
>>>
>>> On Tue, Nov 12, 2013 at 2:52 AM, unmesha sreeveni wrote:
>>>
>>>> [quoted text snipped: same log output and stack trace as in the
>>>> first message of this thread]

[no subject]

2013-12-12 Thread chandu banavaram
Hi Experts,

I want to know: when a client wants to store data in HDFS, who divides
the big data into blocks that are then stored on the DataNodes? I mean,
when the client approaches the NameNode to store the data, who divides
the data into blocks, how is that done, and how are the blocks then
sent to the DataNodes?

Please reply with the answer.

With regards,
chandu.


Re:

2013-12-13 Thread chandu banavaram
Hi Mirko,

I don't know how to write MapReduce jobs, so could you please suggest
some website links or send me any notes?

Thank you for your answer; you cleared up one doubt for me.

with Regards,
chandu.


On Thu, Dec 12, 2013 at 7:05 PM, Mirko Kämpf  wrote:

> The procedure of splitting the larger file into blocks is handled by the
> client. It delivers each block to a DataNode (which can be a different one
> for each block, but does not have to be; in a pseudo-distributed cluster,
> for example, we have only one node). Replication of the blocks is handled
> within the cluster by the DataNodes, and later also by the Balancer. Have
> you already dived into the source code of the HDFS client implementation?
> There you will find the details you are looking for.
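>
> To make that concrete, here is a minimal sketch; the class name, the
> path, and the sizes are made up for illustration. The block size is
> just a per-file parameter the client passes when it creates the file;
> the NameNode never touches the data itself, it only hands out a new
> block location each time the client's output stream fills the current
> block.
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class BlockSizeDemo {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     FileSystem fs = FileSystem.get(conf);
>
>     long blockSize = 64L * 1024 * 1024; // 64 MB, a hypothetical value
>     short replication = 3;
>     int bufferSize = 4096;
>
>     // The client-side output stream does the splitting: once the
>     // bytes written reach blockSize, it asks the NameNode for the
>     // next block and streams it to a (possibly different) DataNode
>     // pipeline.
>     FSDataOutputStream out = fs.create(
>         new Path("/tmp/blocksize-demo.bin"),
>         true, bufferSize, replication, blockSize);
>     out.write(new byte[1024]); // write the actual data here
>     out.close();
>   }
> }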
>
> Best wishes
> Mirko
>
>
>
> 2013/12/12 chandu banavaram 
>
>> Hi Experts,
>>
>> I want to know: when a client wants to store data in HDFS, who divides
>> the big data into blocks that are then stored on the DataNodes? I mean,
>> when the client approaches the NameNode to store the data, who divides
>> the data into blocks, how is that done, and how are the blocks then
>> sent to the DataNodes?
>>
>> Please reply with the answer.
>>
>> With regards,
>> chandu.
>>
>
>


[no subject]

2014-01-05 Thread chandu banavaram
Hi experts,
Please clarify the following doubt from a Hadoop learner:

How do I generate Hive reports?

with regards,
chandu.


unsubscribe me

2014-12-03 Thread chandu banavaram
Please unsubscribe me.


unsubscribe me

2014-12-04 Thread chandu banavaram
Yes, I am sure. Please unsubscribe me.


Re: unsubscribe

2015-09-20 Thread chandu banavaram
Please unsubscribe me.

On Mon, Sep 21, 2015 at 6:02 AM, Jiang Xiaodong  wrote:

> unsubscribe
>
> --
> Thanks
> -Xiaodong
>
>


Re: Unsubscribe

2016-01-22 Thread chandu banavaram
Unsubscribe me.

On Fri, Jan 22, 2016 at 4:01 PM, Doris Donley <
do...@dorisdonley1953.onmicrosoft.com> wrote:

>
>


Re: unsubscribe

2016-06-28 Thread chandu banavaram
Please unsubscribe me.

On Fri, Jun 24, 2016 at 12:41 PM, Anand Sharma 
wrote:

>
> --
> Thanks
> Anand
>