Hi Marcos,

Please find the code below. It works fine in pseudo-distributed mode,
but not in a fully distributed cluster.
We are declaring the Configuration and HTablePool as static fields.


    public static class ParsingMap extends MapReduceBase implements
        Mapper<LongWritable, Text, Text, Text>
    {
        public static HashMap<String, String> mapHashMap = null;
        public static int count = 0;
        public static String source_format = null;
        static Configuration conf = null;
        static HTablePool htp = null;

        static
        {
            conf = HBaseConfiguration.create();
            htp = new HTablePool(conf, 1);
        }

        private Text word = new Text();

        public HashMap<String, String> getHash()
        {
            return mapHashMap;
        }

        public void map(LongWritable key, Text value, OutputCollector<Text, Text> output,
            Reporter reporter) throws IOException
        {
            try {
                String strLine = value.toString();
                String[] strLine1 = strLine.split("\\t");
                String strLineKey = strLine1[0];
                String strLineValue = strLine1[1];

                Map<String, String> abc = null;
                if (strLineValue != null)
                {
                    abc = MappingClass.mapValues(strLineValue, mapHashMap, strLineKey, htp);
                }
            }
            catch (Exception e)
            {
                e.printStackTrace();
            }
        }
    }


    public static class ParsingReduce extends MapReduceBase implements
        Reducer<Text, Text, Text, Text>
    {
        public static HashMap<String, String> mapHashMap = null;
        static Configuration conf = null;
        static HTablePool htp = null;

        static
        {
            conf = HBaseConfiguration.create();
            htp = new HTablePool(conf, 1);
        }

        public void reduce(Text key, Iterator<Text> values,
            OutputCollector<Text, Text> output, Reporter reporter) throws IOException
        {
            String val = null;
            String[] v = null;

            HTableInterface table1 = null;
            Put p = null;

            table1 = htp.getTable(key.toString());
            String k = mapHashMap.get("phase2_folders." + key.toString() + "_format");
            String[] kArray = k.split(",");

            while (values.hasNext())
            {
                val = values.next().toString();
                v = val.split(",");

                p = new Put(v[0].getBytes());

                for (int i = 0; i < kArray.length; i++)
                {
                    p.add(key.toString().getBytes(), kArray[i].getBytes(),
                        p.getTimeStamp(), v[i].getBytes());
                }
                table1.put(p);
            }

            table1.close();
            htp.closeTablePool(key.toString());
        }
    }
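We are also considering restructuring the tasks so that only one HBaseConfiguration exists per JVM and the pool is tied to the task lifecycle, roughly like the untested sketch below. `ConfHolder` is a hypothetical helper class we would add, and the column qualifier and field delimiter are placeholders; this is written against the old (`org.apache.hadoop.mapred`) API and would need a running HBase cluster to actually execute. Would something like this address the connection issue?

```java
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Hypothetical helper: one shared HBaseConfiguration per JVM,
// created lazily and reused by every mapper/reducer class.
final class ConfHolder {
    private static Configuration conf;

    static synchronized Configuration get() {
        if (conf == null) {
            conf = HBaseConfiguration.create();
        }
        return conf;
    }
}

public class PooledReduce extends MapReduceBase
        implements Reducer<Text, Text, Text, Text> {

    private HTablePool htp;

    @Override
    public void configure(JobConf job) {
        // Build the pool from the shared Configuration in the
        // lifecycle hook, instead of a static initializer per class.
        htp = new HTablePool(ConfHolder.get(), 1);
    }

    public void reduce(Text key, Iterator<Text> values,
            OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        HTableInterface table = htp.getTable(key.toString());
        try {
            while (values.hasNext()) {
                // Field delimiter is a placeholder assumption.
                String[] v = values.next().toString().split(",");
                Put p = new Put(v[0].getBytes());
                p.add(key.toString().getBytes(), "col".getBytes(), v[0].getBytes());
                table.put(p);
            }
        } finally {
            // Return the table to the pool when this key is done.
            table.close();
        }
    }

    @Override
    public void close() throws IOException {
        // Release pooled connections when the task ends, so ZooKeeper
        // connections are not left open. Table name is a placeholder.
        htp.closeTablePool("tableName");
    }
}
```

The idea is that with static initializers in both ParsingMap and ParsingReduce, each task JVM creates at least two separate HBaseConfiguration objects, and each one opens its own ZooKeeper connection; across many task JVMs that multiplies quickly.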


Appreciate your help on this.

Thanks,
Manu S

On Thu, Jun 7, 2012 at 2:35 AM, Marcos Ortiz <mlor...@uci.cu> wrote:

>  Can you show us the code that you are developing?
> Which HBase version are you using ?
>
> You should check whether you are creating multiple HBaseConfiguration
> objects. The right approach is to create a single HBaseConfiguration
> object and reuse it throughout your code.
>
> Regards
>
>
>
> On 06/06/2012 10:25 AM, Manu S wrote:
>
> Hi All,
>
> We are running a MapReduce job on a fully distributed cluster. The output
> of the job is written to HBase.
>
> While running this job we are getting an error:
>
> *Caused by: org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is 
> able to connect to ZooKeeper but the connection closes immediately. This 
> could be a sign that the server has too many connections (30 is the default). 
> Consider inspecting your ZK server logs for that error and then make sure you 
> are reusing HBaseConfiguration as often as you can. See HTable's javadoc for 
> more information.*
>       at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:155)
>       at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:1002)
>       at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:304)
>       at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:295)
>       at 
> org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:157)
>       at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:169)
>       at 
> org.apache.hadoop.hbase.client.HTableFactory.createHTableInterface(HTableFactory.java:36)
>
>
> I went through some threads related to this issue and modified *zoo.cfg*
> accordingly. These configurations are the same on all the nodes.
> Please find the configuration of HBase & ZooKeeper:
>
> hbase-site.xml:
>
> <configuration>
>
> <property>
> <name>hbase.cluster.distributed</name>
> <value>true</value>
> </property>
>
> <property>
> <name>hbase.rootdir</name>
> <value>hdfs://namenode/hbase</value>
> </property>
>
> <property>
> <name>hbase.zookeeper.quorum</name>
> <value>namenode</value>
> </property>
>
> </configuration>
>
>
> zoo.cfg:
>
> # The number of milliseconds of each tick
> tickTime=2000
> # The number of ticks that the initial
> # synchronization phase can take
> initLimit=10
> # The number of ticks that can pass between
> # sending a request and getting an acknowledgement
> syncLimit=5
> # the directory where the snapshot is stored.
> dataDir=/var/zookeeper
> # the port at which the clients will connect
> clientPort=2181
> #server.0=localhost:2888:3888
> server.0=namenode:2888:3888
>
> ################# Max Client connections ###################
> *maxClientCnxns=1000
> minSessionTimeout=4000
> maxSessionTimeout=40000*
>
>
> It would be really great if anyone can help me to resolve this issue by
> giving your thoughts/suggestions.
>
> Thanks,
> Manu S
>
>
> --
> Marcos Luis Ortíz Valmaseda
>  Data Engineer && Sr. System Administrator at UCI
>  http://marcosluis2186.posterous.com
>  http://www.linkedin.com/in/marcosluis2186
>  Twitter: @marcosluis2186
>
>
>
>
