Here is our code, where pool is an HTablePool:

    HTable table = pool.getTable(pTable);
    Scan _scan = new Scan();
    _scan.addColumn(pFamily.getBytes());
    try {
        return table.getScanner(_scan);
    } catch (IOException _e) {
        // ... error handling elided ...
    }

We couldn't get the scanner.

I found in regionserver log:

2010-03-05 15:44:09,234 DEBUG [pool-1-thread-1] hfile.LruBlockCache(551):
Cache Stats: Sizes: Total=46.31048MB (48560056), Free=1179.1646MB
(1236443592), Max=1225.475MB (1285003648), Counts: Blocks=481,
Access=1508314, Hit=1482340, Miss=25974, Evictions=0, Evicted=0, Ratios: Hit
Ratio=98.27794432640076%, Miss Ratio=1.7220553010702133%, Evicted/Run=NaN
2010-03-05 15:44:53,964 WARN  [ResponseProcessor for block
blk_-467155928723148214_478185]
hdfs.DFSClient$DFSOutputStream$ResponseProcessor(2440): DFSOutputStream
ResponseProcessor exception  for block
blk_-467155928723148214_478185java.io.IOException: Bad response 1 for block
blk_-467155928723148214_478185 from datanode 10.10.31.135:50010
        at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2423)

2010-03-05 15:44:53,964 WARN  [DataStreamer for file /hbase/.logs/
snv-it-lin-011.projectrialto.com,60020,1267695848509/hlog.dat.1267860203609
block blk_-467155928723148214_478185] hdfs.DFSClient$DFSOutputStream(2476):
Error Recovery for block blk_-467155928723148214_478185 bad datanode[2]
10.10.31.135:50010
2010-03-05 15:44:53,966 WARN  [DataStreamer for file /hbase/.logs/
snv-it-lin-011.projectrialto.com,60020,1267695848509/hlog.dat.1267860203609
block blk_-467155928723148214_478185] hdfs.DFSClient$DFSOutputStream(2531):
Error Recovery for block blk_-467155928723148214_478185 in pipeline
10.10.31.136:50010, 10.10.31.137:50010, 10.10.31.135:50010: bad datanode
10.10.31.135:50010
2010-03-05 15:45:09,234 DEBUG [pool-1-thread-1] hfile.LruBlockCache(551):
Cache Stats: Sizes: Total=46.31048MB (48560056), Free=1179.1646MB
(1236443592), Max=1225.475MB (1285003648), Counts: Blocks=481,
Access=1508314, Hit=1482340, Miss=25974, Evictions=0, Evicted=0, Ratios: Hit
Ratio=98.27794432640076%, Miss Ratio=1.7220553010702133%, Evicted/Run=NaN
2010-03-05 15:45:33,245 INFO  [regionserver/10.10.31.136:60020.leaseChecker]
regionserver.HRegionServer$ScannerListener(1995): Scanner
-761693286771808734 lease expired
2010-03-05 15:45:42,351 INFO  [regionserver/10.10.31.136:60020.leaseChecker]
regionserver.HRegionServer$ScannerListener(1995): Scanner
-6705950189842286968 lease expired
2010-03-05 15:46:01,351 INFO  [regionserver/10.10.31.136:60020.leaseChecker]
regionserver.HRegionServer$ScannerListener(1995): Scanner
5506641803568618255 lease expired

We have this in hbase-site.xml:
  <property>
    <name>hbase.regionserver.lease.period</name>
    <value>180000</value>
  </property>

I am wondering if I can ignore the 'Bad response 1 for block' warning.
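
Following J-D's earlier suggestion, the recovery we are considering is to remember the last row key returned and, when the lease expires mid-scan, open a fresh scanner starting just past that row. Below is a self-contained sketch of that control flow in plain Java; `RowSource` is a made-up stand-in for a ResultScanner whose lease can expire, and `scanWithResume` is a hypothetical name, not our actual method:

```java
import java.util.ArrayList;
import java.util.List;

public class ResumeScanDemo {
    // Stand-in for a scanner that loses its lease after a few calls.
    static class RowSource {
        private final List<String> rows;
        private int pos;
        private int callsLeft;
        RowSource(List<String> rows, String startRow, int callsBeforeFailure) {
            this.rows = rows;
            this.callsLeft = callsBeforeFailure;
            // Seek to the start row (inclusive), like Scan's start row.
            while (pos < rows.size() && rows.get(pos).compareTo(startRow) < 0) pos++;
        }
        String next() {
            if (pos >= rows.size()) return null;  // end of table
            if (callsLeft-- <= 0)                 // stand-in for UnknownScannerException
                throw new IllegalStateException("lease expired");
            return rows.get(pos++);
        }
    }

    // Scan all rows, reopening from just past the last seen row on failure.
    static List<String> scanWithResume(List<String> table) {
        List<String> out = new ArrayList<>();
        String startRow = "";                      // "" scans from the beginning
        while (true) {
            RowSource scanner = new RowSource(table, startRow, 2); // fails on every 3rd call
            try {
                String row;
                while ((row = scanner.next()) != null) {
                    out.add(row);
                    startRow = row + "\0";         // resume just past the last row we saw
                }
                return out;                        // clean end of scan
            } catch (IllegalStateException expired) {
                // Lease lost: loop around and open a new scanner at startRow.
            }
        }
    }

    public static void main(String[] args) {
        List<String> table = List.of("a", "b", "c", "d", "e");
        System.out.println(scanWithResume(table)); // prints [a, b, c, d, e]
    }
}
```

The same loop shape should apply with the real client: catch the scanner exception, close the dead scanner, and build a new Scan whose start row is the last key seen plus a zero byte so the row itself is not re-read.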

Thanks

On Fri, Mar 5, 2010 at 4:23 PM, Jean-Daniel Cryans <[email protected]> wrote:

> That happens when you spend more than 1 minute between each call to a
> region server (in the region server log you should see a "scanner
> lease expired" message). If you are using scan.setCaching(x), then you
> must spend less than 1 minute processing x rows.
>
> Either make sure you spend less than 1 minute between calls, or create
> a new scan and set its start row to the latest row you saw.
>
> J-D
>
> On Fri, Mar 5, 2010 at 4:13 PM, Ted Yu <[email protected]> wrote:
> > Hi,
> > We use HBase 0.20.1
> > I saw the following in regionserver log:
> > 2010-03-05 16:02:57,952 ERROR [IPC Server handler 60 on 60020]
> > regionserver.HRegionServer(844):
> > org.apache.hadoop.hbase.UnknownScannerException: Name: -1
> >  at
> >
> org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:1925)
> >  at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
> >  at
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >  at java.lang.reflect.Method.invoke(Method.java:597)
> >  at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:648)
> >  at
> > org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
> > 2010-03-05 16:03:09,234 DEBUG [pool-1-thread-1] hfile.LruBlockCache(551):
> > Cache Stats: Sizes: Total=46.31048MB (48560056), Free=1179.1646MB
> > (1236443592), Max=1225.475MB (1285003648), Counts: Blocks=481,
> > Access=1508314, Hit=1482340, Miss=25974, Evictions=0, Evicted=0, Ratios:
> Hit
> > Ratio=98.27794432640076%, Miss Ratio=1.7220553010702133%, Evicted/Run=NaN
> >
> > At the same time, client complained:
> > org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to
> contact
> > region server 10.10.31.136:60020 for region ruletable,,1267831180107,
> row
> > '', but failed after 10 attempts.
> > Exceptions:
> > java.io.IOException: Call to /10.10.31.136:60020 failed on local
> exception:
> > java.io.EOFException
> > java.io.IOException: Call to /10.10.31.136:60020 failed on local
> exception:
> > java.io.EOFException
> > java.io.IOException: Call to /10.10.31.136:60020 failed on local
> exception:
> > java.io.EOFException
> > java.io.IOException: Call to /10.10.31.136:60020 failed on local
> exception:
> > java.io.EOFException
> > java.io.IOException: Call to /10.10.31.136:60020 failed on local
> exception:
> > java.io.EOFException
> > java.io.IOException: Call to /10.10.31.136:60020 failed on local
> exception:
> > java.io.EOFException
> > java.io.IOException: Call to /10.10.31.136:60020 failed on local
> exception:
> > java.io.EOFException
> > java.io.IOException: Call to /10.10.31.136:60020 failed on local
> exception:
> > java.io.EOFException
> > java.io.IOException: Call to /10.10.31.136:60020 failed on local
> exception:
> > java.io.EOFException
> > java.io.IOException: Call to /10.10.31.136:60020 failed on local
> exception:
> > java.io.EOFException
> >
> >        at
> >
> org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionServerWithRetries(HConnectionManager.java:1048)
> >        at
> >
> org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1935)
> >        at
> >
> org.apache.hadoop.hbase.client.HTable$ClientScanner.initialize(HTable.java:1855)
> >        at
> org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:376)
> >        at
> >
> net.kindsight.webmap.rules.datastore.HBaseDataStore.get(HBaseDataStore.java:297)
> >
> > What should I do to get past the UnknownScannerException?
> >
> > Thanks
> >
>
