Re: Region server not accept connections intermittently

2014-07-10 Thread Rural Hunter
I got the dump of the problematic rs from web ui: 
http://pastebin.com/4hfhkDUw

output of top -H -p PID: http://pastebin.com/LtzkScYY
I also got the output of jstack but I believe it's already in the dump 
so I do not paste it again. This time the hang lasted about 20 minutes.


On 2014/7/9 12:48, Esteban Gutierrez wrote:

Hi Rural,

That's interesting. Since you are passing
hbase.zookeeper.property.maxClientCnxns, does it mean that ZooKeeper is
managed by HBase? If you experience the issue again, can you try to obtain a
jstack (as the user that started the HBase process, or from the RS UI at
rs:port/dump if it is responsive), as Ted suggested? The output of top -H -p
PID might be useful too, where PID is the pid of the RS. If you have some
metrics monitoring, it would be interesting to see how callQueueLength and
the blocked threads change over time.
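
If jstack cannot attach (for example, when run as the wrong user), an
equivalent dump can be produced in-process with the JMX thread bean. A
minimal generic-Java sketch, not tied to HBase:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;

public class ThreadDumper {
    // Build a jstack-like dump of all live threads, including
    // lock/monitor information, via the platform ThreadMXBean.
    public static String dump() {
        StringBuilder sb = new StringBuilder();
        for (ThreadInfo info :
                ManagementFactory.getThreadMXBean().dumpAllThreads(true, true)) {
            sb.append(info.toString());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(dump());
    }
}
```

This produces a stack dump similar to the /dump servlet output, so it can be
wired into any in-process monitoring hook.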

cheers,
esteban.


--
Cloudera, Inc.






Re: Region server not accept connections intermittently

2014-07-10 Thread Ted Yu
I noticed the blockSeek() call in HFileReaderV2. 

Did you take only one dump during the 20 minute hang ?

Cheers

On Jul 10, 2014, at 1:54 AM, Rural Hunter ruralhun...@gmail.com wrote:

 I got the dump of the problematic rs from web ui: http://pastebin.com/4hfhkDUw
 output of top -H -p PID: http://pastebin.com/LtzkScYY
 I also got the output of jstack but I believe it's already in the dump so I 
 do not paste it again. This time the hang lasted about 20 minutes.
 
 On 2014/7/9 12:48, Esteban Gutierrez wrote:
 Hi Rural,
 
 Thats interesting. Since you are passing
 hbase.zookeeper.property.maxClientCnxns does it means that ZK is managed by
 HBase? If you experience the issue again, can you try to obtain a jstack
 (as the user that started the hbase process or try from the RS UI if
 responsive rs:port/dump) as Ted suggested? the output of top -H -p PID
 might be useful too where PID is the pid of the RS. If you have some
 metrics monitoring it would be interesting to see how callQueueLength and
 the blocked threads change over time.
 
 cheers,
 esteban.
 
 
 --
 Cloudera, Inc.
 


Re: error during Hbase running

2014-07-10 Thread Bharath Vissapragada
Hey,

+user@hbase

Make sure you are placing the 2.4-version Hadoop jars in the hbase/lib folder
so that HBase picks up the correct jars at runtime. This needs to be done
across all the nodes in the cluster.

- Bharath


On Thu, Jul 10, 2014 at 1:57 PM, Prashasti Agrawal 
prashasti.iit...@gmail.com wrote:

 Do I need to add some jar files to the hbase lib folder? (I read this on
 some blogs.) But I cannot find hadoop-core-2.4.0.jar in my hadoop_home


 On Thursday, 10 July 2014 13:29:11 UTC+5:30, Prashasti Agrawal wrote:

 Hi,
 I followed the changes mentioned on the site. I made the desired changes
 in the pom file (replacing 2.2 with 2.4 everywhere) and built the project
 with Maven using mvn clean install -Dhadoop.profile=2.0 -DskipTests. But
 the error still shows up the same.

 On Thursday, 10 July 2014 11:46:35 UTC+5:30, Bharath Vissapragada wrote:

 Did you compile HBase 0.94.8 against Hadoop 2.4? Check this page [1]

 [1] http://hbase.apache.org/configuration.html#hadoop


 On Thu, Jul 10, 2014 at 11:34 AM, Prashasti Agrawal 
 prashast...@gmail.com wrote:

 I am trying to run HBase in pseudo-distributed mode. I am using
 HBase 0.94.8 and Hadoop 2.4.0.

 The HMaster and region servers start, but when I try to view them in the
 UI, I get this error:

 Problem accessing /master-status. Reason:

 
 org.apache.hadoop.net.NetUtils.getInputStream(Ljava/net/Socket;)Lorg/apache/hadoop/net/SocketInputWrapper;

 Caused by:

 java.lang.NoSuchMethodError: 
 org.apache.hadoop.net.NetUtils.getInputStream(Ljava/net/Socket;)Lorg/apache/hadoop/net/SocketInputWrapper;
at 
 org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:437)
at 
 org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1124)
at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
at com.sun.proxy.$Proxy10.getProtocolVersion(Unknown Source)
at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:138)
at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:712)
at org.apache.hadoop.hbase.client.HBaseAdmin.init(HBaseAdmin.java:126)
at 
 org.apache.hadoop.hbase.master.MasterStatusServlet.doGet(MasterStatusServlet.java:56)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at 
 org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:835)
at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at 
 org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at 
 org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at 
 org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at 
 org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at 
 org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at 
 org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at 
 org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at 
 org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at 
 org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

  --

 ---
 You received this message because you are subscribed to the Google
 Groups CDH Users group.
 To unsubscribe from this group and stop receiving emails from it, send
 an email to cdh-user+u...@cloudera.org.
 For more options, visit https://groups.google.com/a/
 cloudera.org/d/optout.




 --
 Bharath Vissapragada
 http://www.cloudera.com





-- 
Bharath Vissapragada
http://www.cloudera.com


Re: error during Hbase running

2014-07-10 Thread Ted Yu
Prashasti:
In your mvn command, I noticed that assembly:single was missing.
Is this a typo?

Btw, there is no hadoop-core jar for Hadoop 2.4.0.

Cheers

On Jul 10, 2014, at 2:16 AM, Bharath Vissapragada bhara...@cloudera.com wrote:

 Hey,
 
 +user@hbase
 
 Make sure you are placing 2.4 version hadoop jars in hbase/lib folder so
 that HBase picks correct jars during runtime. This needs to be done across
 all the nodes in the cluster.
 
 - Bharath
 
 
 On Thu, Jul 10, 2014 at 1:57 PM, Prashasti Agrawal 
 prashasti.iit...@gmail.com wrote:
 
 Do I need to add some jar files in hbase lib folder? (I read on some
 blogs) but I cannot find the hadoop-core-2.4.0.jar in my hadoop_home
 
 
 On Thursday, 10 July 2014 13:29:11 UTC+5:30, Prashasti Agrawal wrote:
 
 Hi,
 I followed the changes mentioned in the site. I made the desired changes
 in the pom file (replacind 2.2 with 2.4 everywhere) and built the project
 with mavin using   mvn clean install -Dhadoop.profile=2.0 -DskipTests. But
 the error still shows up the same.
 
 On Thursday, 10 July 2014 11:46:35 UTC+5:30, Bharath Vissapragada wrote:
 
 Did you compile Hbase-0.94.8 with 2.4 ? Check this page[1]
 
 [1] http://hbase.apache.org/configuration.html#hadoop
 
 
 On Thu, Jul 10, 2014 at 11:34 AM, Prashasti Agrawal 
 prashast...@gmail.com wrote:
 
 I am trying to run Hbase in pseudo-distributed mode. I am using
 Hbase-0.94.8 and Hadoop 2.4.0
 
 The Hmaster, regionservers start but, when i try to view them in UI, I
 get this error:
 
 Problem accessing /master-status. Reason:
 

 org.apache.hadoop.net.NetUtils.getInputStream(Ljava/net/Socket;)Lorg/apache/hadoop/net/SocketInputWrapper;
 
 Caused by:
 
 java.lang.NoSuchMethodError: 
 org.apache.hadoop.net.NetUtils.getInputStream(Ljava/net/Socket;)Lorg/apache/hadoop/net/SocketInputWrapper;
at 
 org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:437)
at 
 org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1124)
at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
at com.sun.proxy.$Proxy10.getProtocolVersion(Unknown Source)
at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:138)
at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:712)
at 
 org.apache.hadoop.hbase.client.HBaseAdmin.init(HBaseAdmin.java:126)
at 
 org.apache.hadoop.hbase.master.MasterStatusServlet.doGet(MasterStatusServlet.java:56)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at 
 org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:835)
at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
 org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at 
 org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at 
 org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at 
 org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at 
 org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at 
 org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at 
 org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at 
 org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at 
 org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at 
 org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at 
 org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
 
 
 
 
 --
 Bharath Vissapragada
 http://www.cloudera.com
 
 
 
 -- 
 

Re: Region server not accept connections intermittently

2014-07-10 Thread Rural Hunter

Yes, I can take more if needed when it happens next time.

On 2014/7/10 17:11, Ted Yu wrote:

I noticed the blockSeek() call in HFileReaderV2.

Did you take only one dump during the 20 minute hang ?

Cheers






how to measure IO on HBase

2014-07-10 Thread sahanashankar
Hello,

I have a HBase cluster of 1 Master node and 5 Region Servers deployed on
HDFS. 

What is the best way to measure IO for the operations on cluster such as 

1. Scan
2. Get 
3. Put

I need to conduct the measurement in 2 scenarios. In the first scenario, I
have a simple table (size < 1GB) consisting of just one column family. The
second scenario will be slightly more complicated, with the 'primary key'
being stored in the memstore or HFile in parts.





--
View this message in context: 
http://apache-hbase.679495.n3.nabble.com/how-to-measure-IO-on-HBase-tp4061207.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: how to measure IO on HBase

2014-07-10 Thread Ted Yu
Please see chapter 17 of the reference guide.

Especially http://hbase.apache.org/book.html#ops.monitoring

Cheers

On Jul 10, 2014, at 2:51 AM, sahanashankar sahanashanka...@gmail.com wrote:

 Hello,
 
 I have a HBase cluster of 1 Master node and 5 Region Servers deployed on
 HDFS. 
 
 What is the best way to measure IO for the operations on cluster such as 
 
 1. Scan
 2. Get 
 3. Put
 
 I need to conduct the measurement in 2 scenarios. In the first scenario, I
 have a simple table (size < 1GB) consisting of just one column family. The
 second scenario will be slightly more complicated, with the 'primary key'
 being stored in the memstore or HFile in parts.
 
 
 
 
 
 --
 View this message in context: 
 http://apache-hbase.679495.n3.nabble.com/how-to-measure-IO-on-HBase-tp4061207.html
 Sent from the HBase User mailing list archive at Nabble.com.


Need help on RowFilter

2014-07-10 Thread Madabhattula Rajesh Kumar
Hi Team,

Could you please help me resolve the issue below.

In my HBase table, I have 30 records. I need to retrieve records based on a
list of rowkeys. I'm using the code below, but it is not returning any records.

HTable table = new HTable(configuration, tableName);
List<Filter> filters = new ArrayList<Filter>();
Filter rowFilter = new RowFilter(CompareFilter.CompareOp.EQUAL, new
BinaryPrefixComparator(Bytes.toBytes(rowkey)));
filters.add(rowFilter);

Filter rowFilter1 = new RowFilter(CompareFilter.CompareOp.EQUAL, new
BinaryPrefixComparator(Bytes.toBytes(rowkey1)));
filters.add(rowFilter1);

FilterList fl = new FilterList(filters);

Scan s = new Scan();
s.setFilter(fl);
ResultScanner ss = table.getScanner(s);
for (Result r : ss) {
  for (KeyValue kv : r.raw()) {
    System.out.print(new String(kv.getRow()) + " ");
  }
}

Thank you for support

Regards,
Rajesh


Re: Need help on RowFilter

2014-07-10 Thread Ian Brooks
HI Rajesh,

If you know the rowkeys already, you don't need to perform a scan, you can just 
perform a get on the list of rowkeys

e.g.


List<Get> RowKeyList = new ArrayList<Get>();

// for each rowkey
  RowKeyList.add(new Get(Bytes.toBytes(rowkey)));

Result[] results = table.get(RowKeyList);

for (Result r : results) {
  for (KeyValue kv : r.raw()) {
    System.out.print(new String(kv.getRow()) + " ");
  }
}

-Ian Brooks

On Thursday 10 Jul 2014 16:38:04 Madabhattula Rajesh Kumar wrote:
 Hi Team,
 
 Could you please help me to resolve below issue.
 
 In my hbase table, i've a 30 records. I need to retrieve records based on
 list of rowkeys. I'm using below code base. It is not giving records
 
 HTable table = new HTable(configuration, tableName);
 List<Filter> filters = new ArrayList<Filter>();
 Filter rowFilter = new RowFilter(CompareFilter.CompareOp.EQUAL, new
 BinaryPrefixComparator(Bytes.toBytes(rowkey)));
 filters.add(rowFilter);
 
 Filter rowFilter1 = new RowFilter(CompareFilter.CompareOp.EQUAL, new
 BinaryPrefixComparator(Bytes.toBytes(rowkey1)));
 filters.add(rowFilter1);
 
 FilterList fl = new FilterList(filters);
 
 Scan s = new Scan();
 s.setFilter(fl);
 ResultScanner ss = table.getScanner(s);
 for (Result r : ss) {
   for (KeyValue kv : r.raw()) {
     System.out.print(new String(kv.getRow()) + " ");
   }
 }
 
 Thank you for support
 
 Regards,
 Rajesh


Re: Need help on RowFilter

2014-07-10 Thread Madabhattula Rajesh Kumar
Hi Ian,

Thank you very much of the solution. Could you please explain at what are
the use cases we need to use RowFilter?

Regards,
Rajesh


On Thu, Jul 10, 2014 at 4:48 PM, Ian Brooks i.bro...@sensewhere.com wrote:

 HI Rajesh,

 If you know the rowkeys already, you don't need to perform a scan, you can
 just perform a get on the list of rowkeys

 e.g.


 List<Get> RowKeyList = new ArrayList<Get>();

 // for each rowkey
   RowKeyList.add(new Get(Bytes.toBytes(rowkey)));

 Result[] results = table.get(RowKeyList);

 for (Result r : results) {
   for (KeyValue kv : r.raw()) {
     System.out.print(new String(kv.getRow()) + " ");
   }
 }

 -Ian Brooks

 On Thursday 10 Jul 2014 16:38:04 Madabhattula Rajesh Kumar wrote:
  Hi Team,
 
  Could you please help me to resolve below issue.
 
  In my hbase table, i've a 30 records. I need to retrieve records based on
  list of rowkeys. I'm using below code base. It is not giving records
 
  HTable table = new HTable(configuration, tableName);
  List<Filter> filters = new ArrayList<Filter>();
  Filter rowFilter = new RowFilter(CompareFilter.CompareOp.EQUAL, new
  BinaryPrefixComparator(Bytes.toBytes(rowkey)));
  filters.add(rowFilter);
 
  Filter rowFilter1 = new RowFilter(CompareFilter.CompareOp.EQUAL, new
  BinaryPrefixComparator(Bytes.toBytes(rowkey1)));
  filters.add(rowFilter1);
 
  FilterList fl = new FilterList(filters);
 
  Scan s = new Scan();
  s.setFilter(fl);
  ResultScanner ss = table.getScanner(s);
  for (Result r : ss) {
    for (KeyValue kv : r.raw()) {
      System.out.print(new String(kv.getRow()) + " ");
    }
  }
 
  Thank you for support
 
  Regards,
  Rajesh



Re: Need help on RowFilter

2014-07-10 Thread Ian Brooks
Hi Rajesh

From personal use, the RowFilter allows for finer-grained results when 
performing a large scan where the row keys don't exactly match your criteria. 
For example, if you use start and end rows to constrain your scan, the results 
may contain some rows that you don't want, and you can use the row prefix 
filter to keep only the ones you want.

In the setup I'm using, start and end rows are the fastest way to get to the 
segments of data I need within HBase. The row filter is then used to 
clean/restrict the data; think of it like the HAVING clause in SQL, if you are 
used to that. It happens more in post-processing of the result set.

That's my understanding of how it should be used; others may have different 
feedback on this.
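
The pattern (start/stop rows select a segment, the filter refines it) can be
modelled outside HBase with a plain sorted map, since HBase stores rows sorted
by key. The user|date row keys below are hypothetical, and this is model code,
not HBase client code:

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

public class ScanModel {
    // A sorted map stands in for an HBase table keyed by row key.
    static final TreeMap<String, String> table = new TreeMap<String, String>();

    // start/stop narrow the range (like Scan start/stop rows); the prefix
    // test refines rows inside the range (like a RowFilter with a
    // BinaryPrefixComparator).
    static Set<String> scan(String start, String stop, String prefix) {
        TreeMap<String, String> out = new TreeMap<String, String>();
        for (Map.Entry<String, String> e
                : table.subMap(start, true, stop, false).entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                out.put(e.getKey(), e.getValue());
            }
        }
        return out.keySet();
    }

    public static void main(String[] args) {
        table.put("user1|2014-06-30", "a");
        table.put("user1|2014-07-01", "b");
        table.put("user1|2014-07-15", "c");
        table.put("user2|2014-07-01", "d");
        // The range covers all user1 rows; the prefix keeps only July.
        // Prints [user1|2014-07-01, user1|2014-07-15]
        System.out.println(scan("user1|", "user1~", "user1|2014-07"));
    }
}
```

The range scan does the cheap coarse selection; the prefix check then plays
the HAVING-like refinement role described above.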

-Ian Brooks


On Thursday 10 Jul 2014 17:08:58 Madabhattula Rajesh Kumar wrote:
 Hi Ian,
 
 Thank you very much of the solution. Could you please explain at what are
 the use cases we need to use RowFilter?
 
 Regards,
 Rajesh
 
 
 On Thu, Jul 10, 2014 at 4:48 PM, Ian Brooks i.bro...@sensewhere.com wrote:
 
  HI Rajesh,
 
  If you know the rowkeys already, you don't need to perform a scan, you can
  just perform a get on the list of rowkeys
 
  e.g.
 
 
  List<Get> RowKeyList = new ArrayList<Get>();

  // for each rowkey
    RowKeyList.add(new Get(Bytes.toBytes(rowkey)));

  Result[] results = table.get(RowKeyList);

  for (Result r : results) {
    for (KeyValue kv : r.raw()) {
      System.out.print(new String(kv.getRow()) + " ");
    }
  }
 
  -Ian Brooks
 
  On Thursday 10 Jul 2014 16:38:04 Madabhattula Rajesh Kumar wrote:
   Hi Team,
  
   Could you please help me to resolve below issue.
  
   In my hbase table, i've a 30 records. I need to retrieve records based on
   list of rowkeys. I'm using below code base. It is not giving records
  
   HTable table = new HTable(configuration, tableName);
   List<Filter> filters = new ArrayList<Filter>();
   Filter rowFilter = new RowFilter(CompareFilter.CompareOp.EQUAL, new
   BinaryPrefixComparator(Bytes.toBytes(rowkey)));
   filters.add(rowFilter);
  
   Filter rowFilter1 = new RowFilter(CompareFilter.CompareOp.EQUAL, new
   BinaryPrefixComparator(Bytes.toBytes(rowkey1)));
   filters.add(rowFilter1);
  
   FilterList fl = new FilterList(filters);
  
   Scan s = new Scan();
   s.setFilter(fl);
   ResultScanner ss = table.getScanner(s);
   for (Result r : ss) {
     for (KeyValue kv : r.raw()) {
       System.out.print(new String(kv.getRow()) + " ");
     }
   }
  
   Thank you for support
  
   Regards,
   Rajesh
 


Re: Need help on RowFilter

2014-07-10 Thread Madabhattula Rajesh Kumar
Hi Ian,

Thank you very much

Regards,
Rajesh


On Thu, Jul 10, 2014 at 5:16 PM, Ian Brooks i.bro...@sensewhere.com wrote:

 Hi Rajesh

 From personal use, the rowFilter allows for finer grained results when
 performing a large scan where the row keys don't exaclty match your
 criteria. For example if you use start and end rows to constrain your scan,
 the results may contain some results that you don't want and you can use
 the row Prefix filter to only get the ones you want.

 In the setup im using start and end rows are the fastest way to get to the
 segments of data I need within hbase. The row filter is then used to
 clean/restict the data, think of it like the HAVING clause in SQL if you
 are used to that, It happens more in post processing of the result set.

 Thats my understanding of how it should be used, others may have different
 feedback on this.

 -Ian Brooks


 On Thursday 10 Jul 2014 17:08:58 Madabhattula Rajesh Kumar wrote:
  Hi Ian,
 
  Thank you very much of the solution. Could you please explain at what are
  the use cases we need to use RowFilter?
 
  Regards,
  Rajesh
 
 
  On Thu, Jul 10, 2014 at 4:48 PM, Ian Brooks i.bro...@sensewhere.com
 wrote:
 
   HI Rajesh,
  
   If you know the rowkeys already, you don't need to perform a scan, you
 can
   just perform a get on the list of rowkeys
  
   e.g.
  
  
   List<Get> RowKeyList = new ArrayList<Get>();
  
   // for each rowkey
     RowKeyList.add(new Get(Bytes.toBytes(rowkey)));
  
   Result[] results = table.get(RowKeyList);
  
   for (Result r : results) {
     for (KeyValue kv : r.raw()) {
       System.out.print(new String(kv.getRow()) + " ");
     }
   }
  
   -Ian Brooks
  
   On Thursday 10 Jul 2014 16:38:04 Madabhattula Rajesh Kumar wrote:
Hi Team,
   
Could you please help me to resolve below issue.
   
In my hbase table, i've a 30 records. I need to retrieve records
 based on
list of rowkeys. I'm using below code base. It is not giving records
   
HTable table = new HTable(configuration, tableName);
List<Filter> filters = new ArrayList<Filter>();
Filter rowFilter = new RowFilter(CompareFilter.CompareOp.EQUAL, new
BinaryPrefixComparator(Bytes.toBytes(rowkey)));
filters.add(rowFilter);

Filter rowFilter1 = new RowFilter(CompareFilter.CompareOp.EQUAL, new
BinaryPrefixComparator(Bytes.toBytes(rowkey1)));
filters.add(rowFilter1);

FilterList fl = new FilterList(filters);

Scan s = new Scan();
s.setFilter(fl);
ResultScanner ss = table.getScanner(s);
for (Result r : ss) {
  for (KeyValue kv : r.raw()) {
    System.out.print(new String(kv.getRow()) + " ");
  }
}
   
Thank you for support
   
Regards,
Rajesh
  



Re: Need help on RowFilter

2014-07-10 Thread Mingtao Zhang
Hi Rajesh,

It looks like the filter list combines filters with the 'AND' operator.

In your case you had two 'equal' row filters and expected each to return a result.

Mingtao Sent from iPhone

 On Jul 10, 2014, at 7:08 AM, Madabhattula Rajesh Kumar mrajaf...@gmail.com 
 wrote:
 
 Rajesh
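
The AND-vs-OR behaviour can be sketched in plain Java, with Predicate standing
in for HBase's Filter (model code, not the HBase API; FilterList defaults to
MUST_PASS_ALL, so Operator.MUST_PASS_ONE is needed for OR semantics):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

public class FilterListModel {
    // MUST_PASS_ALL: a row survives only if every filter accepts it (AND).
    static Predicate<String> mustPassAll(List<Predicate<String>> filters) {
        return row -> filters.stream().allMatch(f -> f.test(row));
    }

    // MUST_PASS_ONE: a row survives if any filter accepts it (OR).
    static Predicate<String> mustPassOne(List<Predicate<String>> filters) {
        return row -> filters.stream().anyMatch(f -> f.test(row));
    }

    public static void main(String[] args) {
        Predicate<String> eqA = row -> row.equals("rowkey");
        Predicate<String> eqB = row -> row.equals("rowkey1");
        List<Predicate<String>> both = Arrays.asList(eqA, eqB);
        // AND of two different equality filters matches nothing...
        System.out.println(mustPassAll(both).test("rowkey"));   // false
        // ...while OR matches either key.
        System.out.println(mustPassOne(both).test("rowkey"));   // true
    }
}
```

Two equality filters on different row keys can never both match one row, which
is why the AND combination returns no records.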


Re: reply: Why PageFilter returning less rows than it should be?

2014-07-10 Thread Ted Yu
somefilter is added to flo and fla.

Are they the same instance of some filter class ?

Cheers


On Wed, Jul 9, 2014 at 10:29 PM, Michael Calvin 77231...@qq.com wrote:

 I'm sorry, that was an unfinished mail. The proper one comes next.
 But I believe it contains the essentials: FilterLists inside a FilterList.
 The code I didn't write is simply getting the scanner and counting how many
 results it has.





 Here's the proper mail:
  Here's my code:
  Scan sc = new Scan();
  sc.addFamily(c);
  FilterList flo = new FilterList(Operator.MUST_PASS_ONE);
  flo.addFilter(somefilter);
  FilterList fla = new FilterList(Operator.MUST_PASS_ALL);
  fla.addFilter(somefilter);
  FilterList fl = new FilterList(Operator.MUST_PASS_ALL);
  fl.addFilter(new PageFilter(100));
  fl.addFilter(fla);
  fl.addFilter(flo);
  sc.setFilter(fl);

  There are 1480 rows without PageFilter, but only 7 return with it
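
One way a page filter can return fewer matching rows than its page size,
offered as a hypothesis rather than a diagnosis of the code above: if the
stateful page count advances on every row inspected, including rows the other
AND-ed filters reject, the scan stops after 100 inspected rows, however few of
them actually matched. A plain-Java model of that interaction (hypothetical
names; not HBase's actual FilterList evaluation order):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class PagingModel {
    // A stateful "page filter" that stops the scan after it has *seen*
    // pageSize rows, whether or not the other filter accepted them.
    static List<Integer> scan(List<Integer> rows, Predicate<Integer> other,
                              int pageSize) {
        List<Integer> out = new ArrayList<Integer>();
        int seen = 0;
        for (Integer row : rows) {
            if (seen >= pageSize) {
                break;              // page limit reached: scan terminates
            }
            seen++;                 // the page filter counts every row it sees
            if (other.test(row)) {
                out.add(row);       // only rows passing the other filter return
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<Integer>();
        for (int i = 0; i < 2000; i++) {
            rows.add(i);
        }
        // The other filter keeps about 1 row in 15; a page of 100 inspected
        // rows therefore yields only 7 matching rows, not 100.
        System.out.println(scan(rows, r -> r % 15 == 0, 100).size());  // 7
    }
}
```

If this is what is happening, reordering the filters or dropping the
PageFilter from the MUST_PASS_ALL list would change the count, which is an
easy thing to test.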

 --
 Michael.Calvin.Shi‍



 -- Original Message --
 From: Ted Yu; yuzhih...@gmail.com
 Sent: Thursday, July 10, 2014, 11:35 AM
 To: user@hbase.apache.org
 Cc: user@hbase.apache.org
 Subject: Re: Why PageFilter returning less rows than it should be?



 Can you formulate the code snippet as a unit test ?

 It would be much easier to understand / debug that way.

 Thanks


incremental cluster backup using snapshots

2014-07-10 Thread oc tsdb
Hi,

Does the new HBase version (0.99) support incremental backups using snapshots?

If it is not supported in current releases, is it planned for future
releases?

Can we export snapshots to the local file system directly?


Thanks
oc.tsdb