You've got 2 nodes. If dfs.replication is set to 2 and one node is
faster than the other (or the namenode picks it for reads), it is normal
for it to act like this. With two replicas the data lives on both nodes, but
each read is served by only one replica, so the datanode local to the reading
client tends to get all the reads while the other one mostly just receives the
replicated writes and runs block verification.
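
If you want to confirm where the replicas actually landed, here is a minimal
sketch (not from the original thread) using the plain Hadoop FileSystem API of
that era; the /hbase path and the ShowBlockLocations class name are just
assumptions for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowBlockLocations {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();      // picks up hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);
    // Directory to inspect; /hbase as the HBase root dir is an assumption.
    Path dir = new Path(args.length > 0 ? args[0] : "/hbase");
    for (FileStatus file : fs.listStatus(dir)) {
      if (file.isDir()) continue;                  // skip subdirectories
      BlockLocation[] blocks =
          fs.getFileBlockLocations(file, 0, file.getLen());
      for (BlockLocation b : blocks) {
        // getHosts() lists every datanode holding a replica of this block;
        // with dfs.replication=2 both of your nodes should normally appear.
        System.out.println(file.getPath() + " offset " + b.getOffset()
            + " hosts " + java.util.Arrays.toString(b.getHosts()));
      }
    }
  }
}

Running it against a region's store file directory should show both datanodes
listed per block even though only one of them is serving the reads.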
--
Pierre-Alexandre St-Jean


On Thu, Apr 15, 2010 at 2:54 PM, Geoff Hendrey <[email protected]> wrote:

> My Hbase is running on top of an HDFS instance with two datanodes. The
> datanodes are on different machines. When I scan an HBase table, the
> logs of the two datanodes look very different, and I am wondering why.
> One of the nodes shows lots of reads, while the other node shows mainly
> "Verification succeeded for blk"
>
> Here is a snippet from the datanode that shows lots of reads
>
> 2010-04-15 05:33:02,529 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
> /10.241.6.79:50010, dest: /10.241.6.79:49551, bytes: 140113, op:
> HDFS_READ, cliID: DFSClient_-290354743, srvID:
> DS-1729078499-10.241.6.79-50010-1271178291303, blockid:
> blk_2338086647721232803_30645
>
> And here is a snippet from the node that doesn't show any read
> activity, and in general isn't very active
>
> 2010-04-15 05:13:48,858 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
> /10.241.6.79:52332, dest: /10.241.6.80:50010, bytes: 162299, op:
> HDFS_WRITE, cliID: DFSClient_-290354743, srvID:
> DS-642079670-10.241.6.80-50010-1271178858027, blockid:
> blk_-5368905552009678521_30627
> 2010-04-15 05:13:48,858 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for
> block blk_-5368905552009678521_30627 terminating
> 2010-04-15 05:16:47,324 INFO
> org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification
> succeeded for blk_1891611024781487471_15100
>
>
> Any ideas why the activity across the datanodes doesn't seem symmetric? Is
> there any way I can determine the region boundaries for a table?
>
>
> Geoff Hendrey
>
> Software Architect
> deCarta
> Four North Second Street, Suite 950
> San Jose, CA  95113
> office 408.625.3522
> www.decarta.com
>
>
>
>
>
