Re: java.io.IOException: config()

2011-08-06 Thread jagaran das
I am accessing HDFS through multiple threads in parallel.

What is the concept of a lease in HDFS?

Regards,
JD
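For context on the lease question above: in HDFS, a client that opens a file for writing holds a lease on it, which a background daemon thread (the LeaseChecker visible in the traces below) renews periodically with the NameNode; if renewal stops, the NameNode eventually reclaims the lease. A minimal plain-Java sketch of that renewal pattern follows — all names here are illustrative, not the Hadoop API; the real logic lives in DFSClient$LeaseChecker:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative sketch of HDFS-style lease renewal: one daemon thread
// per client renews the leases of all files the client has open for
// writing, in a single periodic pass.
class LeaseRenewer implements Runnable {
    private final ConcurrentMap<String, Long> openFiles = new ConcurrentHashMap<>();
    private final long renewIntervalMs;
    private volatile boolean running = true;

    LeaseRenewer(long renewIntervalMs) {
        this.renewIntervalMs = renewIntervalMs;
    }

    // Called when a file is opened for writing.
    void register(String path) {
        openFiles.put(path, System.currentTimeMillis());
    }

    // Called when the output stream is closed.
    void unregister(String path) {
        openFiles.remove(path);
    }

    boolean isTracking(String path) {
        return openFiles.containsKey(path);
    }

    @Override
    public void run() {
        while (running) {
            // In real HDFS one renewLease RPC covers every file this
            // client holds open; here we just refresh a timestamp.
            long now = System.currentTimeMillis();
            for (String path : openFiles.keySet()) {
                openFiles.replace(path, now); // stand-in for the NameNode RPC
            }
            try {
                Thread.sleep(renewIntervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    void shutdown() {
        running = false;
    }
}
```

The point of the design is that lease renewal is per-client, not per-file: one daemon thread keeps every open file alive, which is why closing streams promptly (or keeping the client alive) matters when many files are open.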



From: Harsh J 
To: jagaran das 
Sent: Friday, 5 August 2011 11:37 PM
Subject: Re: java.io.IOException: config()


How long are you keeping it open for?


On 06-Aug-2011, at 10:14 AM, jagaran das wrote:

Hi,
>
> I am using CDH3.
> I need to stream a huge amount of data from our application to Hadoop.
> I am opening a connection like:
>
> config.set("fs.default.name", hdfsURI);
> FileSystem dfs = FileSystem.get(config);
> String path = hdfsURI + connectionKey;
> Path destPath = new Path(path);
> logger.debug("Path -- " + destPath.getName());
> outStream = dfs.create(destPath);
>
> I keep the outStream open for some time, write through it continuously, and
> then close it. But it is throwing:
>
>05Aug2011 21:36:48,550 DEBUG [LeaseChecker@DFSClient[clientName=DFSClient_218151655, ugi=jagarandas]:
>java.lang.Throwable: for testing
>at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.toString(DFSClient.java:1181)
>at org.apache.hadoop.util.Daemon.<init>(Daemon.java:38)
>at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.put(DFSClient.java:1094)
>at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:547)
>at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:219)
>at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:584)
>at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:565)
>at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:472)
>at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:464)
>at com.apple.ireporter.common.persistence.ConnectionManager.createConnection(ConnectionManager.java:66)
>at com.apple.ireporter.common.persistence.HDPPersistor.writeToHDP(HDPPersistor.java:93)
>at com.apple.ireporter.datatransformer.translator.HDFSTranslator.persistData(HDFSTranslator.java:41)
>at com.apple.ireporter.datatransformer.adapter.TranslatorAdapter.processData(TranslatorAdapter.java:61)
>at com.apple.ireporter.datatransformer.DefaultMessageListener.persistValidatedData(DefaultMessageListener.java:276)
>at com.apple.ireporter.datatransformer.DefaultMessageListener.onMessage(DefaultMessageListener.java:93)
>at org.springframework.jms.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:506)
>at org.springframework.jms.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:463)
>at org.springframework.jms.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:435)
>at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:322)
>at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:260)
>at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:944)
>at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:868)
>at java.lang.Thread.run(Thread.java:680)
>] (RPC.java:230) - Call: renewLease 4
>05Aug2011 21:36:48,550 DEBUG [listenerContainer-1] (DFSClient.java:3274) - DFSClient writeChunk allocating new packet seqno=0, src=/home/hadoop/listenerContainer-1jagaran-dass-macbook-pro.local_247811312605307819, packetSize=65557, chunksPerPacket=127, bytesCurBlock=0
>05Aug2011 21:36:48,551 DEBUG [Thread-11] (DFSClient.java:2499) - Allocating new block
>05Aug2011 21:36:48,552 DEBUG [sendParams-0] (Client.java:761) - IPC Client (47) connection to localhost/127.0.0.1:8020 from jagarandas sending #3
>05Aug2011 21:36:48,553 DEBUG [IPC Client (47) connection to localhost/127.0.0.1:8020 from jagarandas] (Client.java:815) - IPC Client (47) connection to localhost/127.0.0.1:8020 from jagarandas got value #3
>05Aug2011 21:36:48,556 DEBUG [Thread-11] (RPC.java:230) - Call: addBlock 4
>05Aug2011 21:36:48,557 DEBUG [Thread-11] (DFSClient.java:3094) - pipeline = 127.0.0.1:50010
>05Aug2011 21:36:48,557 DEBUG [Thread-11] (DFSClient.java:3102) - Connecting to 127.0.0.1:50010
>05Aug2011 21:36:48,559 DEBUG [Thread-11] (DFSClient.java:3109) - Send buf size 131072
>05Aug2011 21:36:48,635 DEBUG [DataStreamer for file /home/hadoop/listenerContainer-1jagaran-dass-macbook-pro.local_247811312605307819 block blk_-5183404460805094255_1042] (DFSClient.java:2533) - DataStreamer block blk_-5183404460805094255_1042 wrote packet seqno:0 size:1522 offsetInBlock:0 lastPacketInBlock:true
>05Aug2011 21:36:48,638 DEBUG [ResponseProcessor for block blk_-5183404460805094255_1042] (DFSClient.java:2640) - DFSClient Replies for seqno 0 are SUCCESS
>05Aug2011 21:36:48,639 DEBUG [DataStreamer for file /home/hadoop/listenerContainer-1jagaran-dass-macbook-pro.local_247811312605307

Help on DFSClient

2011-08-06 Thread jagaran das
I am keeping a stream open and writing through it from a multithreaded
application.
The application runs on a different box, and I am connecting to the NameNode
remotely.

I was using FileSystem and getting this error; now I am trying DFSClient and
getting the same error.

When I run it from a simple standalone class it does not throw any error, but
when I put it in my application, it throws this error.

Please help me with this.

Regards,
JD 

This is the DFSClient$LeaseChecker.toString() that produces the "for testing"
Throwable in the trace:

  public String toString() {
    String s = getClass().getSimpleName();
    if (LOG.isTraceEnabled()) {
      return s + "@" + DFSClient.this + ": "
          + StringUtils.stringifyException(new Throwable("for testing"));
    }
    return s;
  }
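Worth noting about the snippet above: the Throwable("for testing") is created only so its stack trace can be stringified into the log when TRACE logging is enabled; it is never thrown. The same capture-without-throwing idiom in plain Java (class and method names here are my own, for illustration):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class StackTraceCapture {
    // Render the current call stack as a string without throwing,
    // the same idiom DFSClient's toString() uses under TRACE logging.
    static String captureStack(String label) {
        StringWriter sw = new StringWriter();
        new Throwable(label).printStackTrace(new PrintWriter(sw, true));
        return sw.toString();
    }

    public static void main(String[] args) {
        // The label and the capturing frame both appear in the output,
        // exactly like the "for testing" lines in the traces here.
        System.out.println(captureStack("for testing"));
    }
}
```

So a "java.lang.Throwable: for testing" line at DEBUG/TRACE level is diagnostic output, not an exception being raised.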

My stack trace:

06Aug2011 12:29:24,345 DEBUG [listenerContainer-1] (DFSClient.java:1115) - Wait for lease checker to terminate
06Aug2011 12:29:24,346 DEBUG [LeaseChecker@DFSClient[clientName=DFSClient_280246853, ugi=jagarandas]:
java.lang.Throwable: for testing
at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.toString(DFSClient.java:1181)
at org.apache.hadoop.util.Daemon.<init>(Daemon.java:38)
at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.put(DFSClient.java:1094)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:547)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:513)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:497)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:442)
at com.apple.ireporter.common.persistence.ConnectionManager.createConnection(ConnectionManager.java:74)
at com.apple.ireporter.common.persistence.HDPPersistor.writeToHDP(HDPPersistor.java:95)
at com.apple.ireporter.datatransformer.translator.HDFSTranslator.persistData(HDFSTranslator.java:41)
at com.apple.ireporter.datatransformer.adapter.TranslatorAdapter.processData(TranslatorAdapter.java:61)
at com.apple.ireporter.datatransformer.DefaultMessageListener.persistValidatedData(DefaultMessageListener.java:276)
at com.apple.ireporter.datatransformer.DefaultMessageListener.onMessage(DefaultMessageListener.java:93)
at org.springframework.jms.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:506)
at org.springframework.jms.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:463)
at org.springframework.jms.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:435)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:322)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:260)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:944)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:868)
at java.lang.Thread.run(Thread.java:680)
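One pattern that often sidesteps this class of problem with multithreaded HDFS writes is to avoid having many threads call create() and close() on their own: funnel records through a queue to a single writer thread per destination file, so only one thread ever owns the stream and its lease. A plain-Java sketch, with java.io.OutputStream standing in for the HDFS output stream (all names here are illustrative, not from the original code):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Single-writer pattern: many producer threads enqueue byte[] records;
// exactly one thread owns the OutputStream, so only one HDFS lease is
// ever held for the file and close() happens in exactly one place.
class SingleWriter implements Runnable {
    static final byte[] POISON = new byte[0]; // sentinel to stop the writer

    private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();
    private final OutputStream out; // in real code: FileSystem#create(path)

    SingleWriter(OutputStream out) {
        this.out = out;
    }

    void submit(byte[] record) throws InterruptedException {
        queue.put(record);
    }

    void shutdown() throws InterruptedException {
        queue.put(POISON);
    }

    @Override
    public void run() {
        try {
            while (true) {
                byte[] record = queue.take();
                if (record == POISON) {
                    break;
                }
                out.write(record);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } catch (IOException e) {
            throw new RuntimeException(e);
        } finally {
            try {
                out.close(); // releases the lease exactly once
            } catch (IOException ignored) {
            }
        }
    }
}
```

Producer threads only call submit(); since the writer thread alone touches the stream, two threads can never race to create or close the same HDFS file.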

Slow generation of blockReport at DataNode

2011-08-06 Thread Joe Stein
Does anyone have workarounds I can try for "Slow generation of blockReport at
DataNode causes delay of sending heartbeat to NameNode"? I see the JIRA is
closed (https://issues.apache.org/jira/browse/HADOOP-4584, fixed in the 0.21
release), but I am looking for something I can do now that will solve this
without having to upgrade.

If anyone has worked around this, or has any ideas, it would be greatly
appreciated.

I have already tried cron'ing "find ./" over my HDFS data directories, but no
luck.

Thanks in advance.

/*
Joe Stein
http://www.linkedin.com/in/charmalloc
Twitter: @allthingshadoop
*/


NameNode Profiling Tools

2011-08-06 Thread jagaran das
Hi,

What would be the best way to profile the NameNode? Any specific tools?

We will be streaming transaction data to the NameNode continuously, over
around 2000 concurrent connections, at around 300 KB per transaction.
I am using a DataInputStream and writing continuously through each of the
2000 connections for 5 minutes, then closing them and opening a new set of
2000 connections.

Are there any benchmarks on CPU and memory utilization of the NameNode?
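One low-overhead starting point for this kind of profiling is JMX: the NameNode exposes its metrics as MBeans, and the standard platform MXBeans give heap and thread figures for any JVM. A minimal sketch reading the platform MXBeans of the local JVM (attaching to a remote NameNode would instead go through a JMXConnector and a service URL, which is deployment-specific, so it is omitted here):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

public class JvmStats {
    // Read heap usage from the platform memory MXBean, the same
    // interface a NameNode exposes over its JMX port.
    public static long heapUsedBytes() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        return mem.getHeapMemoryUsage().getUsed();
    }

    // Live thread count: useful when 2000 client connections are
    // expected to drive up NameNode handler-thread activity.
    public static int liveThreads() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        return threads.getThreadCount();
    }

    public static void main(String[] args) {
        System.out.println("heap used (bytes): " + heapUsedBytes());
        System.out.println("live threads: " + liveThreads());
    }
}
```

Sampling these periodically while the 2000-connection load runs gives a rough CPU/memory profile without attaching a heavyweight profiler to the NameNode.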

My NameNode box config:
1. HP DL360 G7, 2 x 2.66 GHz CPUs, 72 GB RAM, 8 x 300 GB drives.

Regards,
JD