What do you have for max sessions on your zk ensemble? Maybe you have the default of 30 and are running more than that many maps at one time?
St.Ack
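If the ensemble is the one HBase manages, the per-client limit is `hbase.zookeeper.property.maxClientCnxns` (it defaults to 30 in the 0.90 line); on a standalone ensemble it is `maxClientCnxns` in zoo.cfg. A sketch of raising it in hbase-site.xml — the value 300 here is just an illustrative choice, size it to your map-task count:

```xml
<!-- hbase-site.xml: allow more concurrent ZK connections per client IP.
     300 is an example value, not a recommendation. -->
<property>
  <name>hbase.zookeeper.property.maxClientCnxns</name>
  <value>300</value>
</property>
```

You can check how many connections each host currently holds with ZooKeeper's four-letter commands, e.g. `echo stat | nc doop7.dt.sv4.decarta.com 2181`.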
On Sat, Jul 9, 2011 at 5:26 AM, Rohit Nigam <[email protected]> wrote:
> Hi Guys
>
> I again ran the job for exporting and it died with these exceptions:
>
> [exec] java.io.IOException: Connection reset by peer
> [exec]     at sun.nio.ch.FileDispatcher.read0(Native Method)
> [exec]     at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> [exec]     at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
> [exec]     at sun.nio.ch.IOUtil.read(IOUtil.java:218)
> [exec]     at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
> [exec]     at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:858)
> [exec]     at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1130)
> [exec] 2011-07-09 02:24:40,076 WARN [Thread-33-SendThread(doop7.dt.sv4.decarta.com:2181)] zookeeper.ClientCnxn$SendThread(1161): Session 0x0 for server doop7.dt.sv4.decarta.com/10.241.8.227:2181, unexpected error, closing socket connection and attempting reconnect
> [exec] java.io.IOException: Connection reset by peer
> [exec]     at sun.nio.ch.FileDispatcher.read0(Native Method)
> [exec]     at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> [exec]     at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
> [exec]     at sun.nio.ch.IOUtil.read(IOUtil.java:218)
> [exec]     at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
> [exec]     at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:858)
> [exec]     at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1130)
> [exec] 2011-07-09 02:24:40,654 WARN [Thread-33-SendThread(doop5.dt.sv4.decarta.com:2181)] zookeeper.ClientCnxn$SendThread(1161): Session 0x0 for server doop5.dt.sv4.decarta.com/10.241.8.225:2181, unexpected error, closing socket connection and attempting reconnect
> [exec] java.io.IOException: Connection reset by peer
> [exec]     at sun.nio.ch.FileDispatcher.read0(Native Method)
> [exec]     at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> [exec]     at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
> [exec]     at sun.nio.ch.IOUtil.read(IOUtil.java:218)
> [exec]     at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
> [exec]     at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:858)
> [exec]     at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1130)
> [exec] 2011-07-09 02:24:41,591 WARN [Thread-33-SendThread(doop9.dt.sv4.decarta.com:2181)] zookeeper.ClientCnxn$SendThread(1161): Session 0x0 for server doop9.dt.sv4.decarta.com/10.241.8.229:2181, unexpected error, closing socket connection and attempting reconnect
> [exec] java.io.IOException: Connection reset by peer
> [exec]     at sun.nio.ch.FileDispatcher.read0(Native Method)
> [exec]     at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> [exec]     at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
> [exec]     at sun.nio.ch.IOUtil.read(IOUtil.java:218)
> [exec]     at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
> [exec]     at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:858)
> [exec]     at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1130)
> [exec] 2011-07-09 02:24:41,693 ERROR [Thread-33] mapreduce.TableInputFormat(93): org.apache.hadoop.hbase.ZooKeeperConnectionException: org.apache.hadoop.hbase.ZooKeeperConnectionException: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
> [exec]     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:988)
> [exec]     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:301)
> [exec]     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:292)
> [exec]     at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:155)
> [exec]     at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:167)
> [exec]     at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:145)
> [exec]     at org.apache.hadoop.hbase.mapreduce.TableInputFormat.setConf(TableInputFormat.java:91)
> [exec]     at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
> [exec]     at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
> [exec]     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:605)
> [exec]     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:322)
> [exec]     at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
> [exec] Caused by: org.apache.hadoop.hbase.ZooKeeperConnectionException: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
> [exec]     at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:147)
> [exec]     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:986)
> [exec]     ... 11 more
> [exec] Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
> [exec]     at org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
> [exec]     at org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
> [exec]     at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:637)
> [exec]     at org.apache.hadoop.hbase.zookeeper.ZKUtil.createAndFailSilent(ZKUtil.java:902)
> [exec]     at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:133)
> [exec]     ... 12 more
>
> [exec] 2011-07-09 02:24:41,723 WARN [Thread-33] mapred.LocalJobRunner$Job(293): job_local_0001
> [exec] java.io.IOException: Cannot create a record reader because of a previous error. Please look at the previous logs lines from the task's full log for more details.
> [exec]     at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.createRecordReader(TableInputFormatBase.java:98)
> [exec]     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:613)
> [exec]     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:322)
> [exec]     at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
> [exec] 2011-07-09 02:24:42,123 WARN [Thread-33-SendThread(doop4.dt.sv4.decarta.com:2181)] zookeeper.ClientCnxn$SendThread(1161): Session 0x0 for server doop4.dt.sv4.decarta.com/10.241.8.224:2181, unexpected error, closing socket connection and attempting reconnect
> [exec] java.io.IOException: Connection reset by peer
> [exec]     at sun.nio.ch.FileDispatcher.read0(Native Method)
> [exec]     at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> [exec]     at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
> [exec]     at sun.nio.ch.IOUtil.read(IOUtil.java:218)
> [exec]     at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
> [exec]     at org.apache.zookeeper.ClientCnxn$SendThread.doIO(ClientCnxn.java:858)
> [exec]     at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1130)
> [exec] Result: 1
>
> Any ideas what could be causing it?
>
> Rohit
>
> From: Rohit Nigam
> Sent: Friday, July 08, 2011 8:39 PM
> To: [email protected]
> Cc: Search
> Subject: Table Regions not getting exported properly
>
> Hi
>
> I am exporting a table from one cluster to another cluster using the native "export" feature, but the table is not getting exported fully; it is somehow missing regions at the end. The command finishes without any error. Can somebody help me with what could be going wrong in the export?
>
> This is the ant script which I am using to export:
>
> <target name="export">
>     <exec executable="${hadoop-home}/bin/hadoop" failonerror="false">
>         <arg value="--config"/>
>         <arg value="${hbase-conf}"/>
>         <arg value="jar"/>
>         <arg value="${hbase-jar}"/>
>         <arg value="export"/>
>         <arg value="${table-name}"/>
>         <arg value="${hdfs-table-path}"/>
>     </exec>
> </target>
>
> Any help would be appreciated.
>
> Rohit
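For reference, the ant target above boils down to this direct invocation of the Export job bundled in the HBase jar; the ${...} values are the script's ant properties, not literal paths:

```sh
# Sketch of the equivalent command line to the <exec> target above.
# ${hadoop-home}, ${hbase-conf}, ${hbase-jar}, ${table-name} and
# ${hdfs-table-path} stand in for the ant properties of the same names.
${hadoop-home}/bin/hadoop --config ${hbase-conf} \
    jar ${hbase-jar} export \
    ${table-name} ${hdfs-table-path}
```

Note that `failonerror="false"` in the target means ant will report BUILD SUCCESSFUL even when hadoop exits non-zero (as in the `Result: 1` above), so a "clean" build does not by itself prove the export completed.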
