Hi,
I'm using HBase 0.98.4.2.2.0.0-2041-hadoop2 running on 9 nodes. My table is
distributed across 12 regions and contains about 113M records.
I'm running a pagination query using:
Filter pageFilter = new PageFilter(pageSize);
Scan scan = new Scan();
RegexStringComparator comp = new
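The snippet above is cut off in the archive; for context, a fuller client-side pagination loop with PageFilter might look like the sketch below (the table name "mytable" and page size are placeholders, not from the original message). One caveat worth noting for a 12-region table: PageFilter is evaluated independently on each region server, so a single scan can return up to regions * pageSize rows; the client has to cap the page itself and track the last row seen to resume the next page.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PageFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class PagedScan {
    // Smallest row key strictly greater than lastRow: append a 0x00 byte.
    static byte[] nextStartRow(byte[] lastRow) {
        return Bytes.add(lastRow, new byte[] { 0 });
    }

    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable"); // placeholder table name
        int pageSize = 100;
        byte[] lastRow = null;
        try {
            while (true) {
                Scan scan = new Scan();
                scan.setFilter(new PageFilter(pageSize));
                if (lastRow != null) {
                    scan.setStartRow(nextStartRow(lastRow)); // resume after previous page
                }
                int rows = 0;
                ResultScanner scanner = table.getScanner(scan);
                try {
                    for (Result r : scanner) {
                        lastRow = r.getRow();
                        rows++; // process r here
                        if (rows >= pageSize) break; // client-side cap across regions
                    }
                } finally {
                    scanner.close();
                }
                if (rows < pageSize) break; // short page: no more rows
            }
        } finally {
            table.close();
        }
    }
}
```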
Spark supports creating RDDs using Hadoop input and output formats (
https://spark.apache.org/docs/1.2.1/api/scala/index.html#org.apache.spark.rdd.HadoopRDD)
. You can use our TableInputFormat (
https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableInputFormat.html)
or
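In case it helps, a minimal sketch of reading an HBase table into a Spark RDD through TableInputFormat (assuming the HBase jars are on the Spark classpath; the table name "mytable" is a placeholder):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class HBaseSparkRead {
    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf().setAppName("HBaseRead");
        JavaSparkContext sc = new JavaSparkContext(sparkConf);

        Configuration conf = HBaseConfiguration.create();
        conf.set(TableInputFormat.INPUT_TABLE, "mytable"); // placeholder table name

        // Each RDD element is one HBase row: (row key, Result).
        JavaPairRDD<ImmutableBytesWritable, Result> rows =
            sc.newAPIHadoopRDD(conf, TableInputFormat.class,
                ImmutableBytesWritable.class, Result.class);

        System.out.println("rows: " + rows.count());
        sc.stop();
    }
}
```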
Have a look at the versions of TableMapReduceUtil#initTableMapperJob that
take a List<Scan> of instances. Does that provide what you're looking for?
-n
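For reference, a sketch of that overload against the 0.98 API (the table names and the mapper are placeholders): each Scan carries its target table as a scan attribute, which MultiTableInputFormat reads to route the splits.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class MultiScanJob {
    static class MyMapper extends TableMapper<Text, IntWritable> {
        @Override
        protected void map(ImmutableBytesWritable key, Result value, Context context) {
            // emit per-row output here
        }
    }

    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "multi-scan");
        job.setJarByClass(MultiScanJob.class);

        List<Scan> scans = new ArrayList<Scan>();
        for (String tableName : new String[] { "table1", "table2" }) { // placeholders
            Scan scan = new Scan();
            // MultiTableInputFormat reads the target table from this attribute.
            scan.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME, Bytes.toBytes(tableName));
            scans.add(scan);
        }

        TableMapReduceUtil.initTableMapperJob(scans, MyMapper.class,
            Text.class, IntWritable.class, job);
        // ... set reducer / output format, then job.waitForCompletion(true)
    }
}
```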
On Wed, Mar 4, 2015 at 6:05 AM, Dave Latham lat...@davelink.net wrote:
That's not possible with HBase today. The simplest thing may be to set
Your best bet is to look at the examples provided in the hbase-examples
module, f.e.
https://github.com/apache/hbase/blob/branch-1.0/hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/RowCountEndpoint.java
or
So after removing all the replication peers hbase still doesn't want to
clean up the oldWALs folder. In the master logs I don't see any errors
from ReplicationLogCleaner or LogCleaner. I have my logging set to INFO so
I'd think I would see something.
Is there any way to run the
Hi, experts.
I am studying how to program an Endpoint.
The material I have is the book HBase: The Definitive Guide (3rd edition). I also
read the blog https://blogs.apache.org/hbase/entry/coprocessor_introduction. But
it seems that the Endpoint has been changed to use the Protobuf technique. So I
feel the
It's going to be fairly difficult imho.
What you need to look at is regions. Tables are split into regions. Regions
are allocated to region servers (i.e. HBase nodes). Reads and writes are
directed to the region server owning the region. Regions can move from one
region server to another, that's the
-dev
+user
Hi,
Please try uploading the logs from this RegionServer to pastebin.com or
gist.github.com so we can look into it. Also, which version of HBase are
you using?
cheers,
esteban.
--
Cloudera, Inc.
On Tue, Mar 3, 2015 at 11:02 PM, shivanandpawar shivanand.pa...@gmail.com
wrote:
We
Hi Nicolas,
Thank you for your explanation. I understand the issue here, it's as I
suspected - the client Java API is not privy to the region operations. I'll
look at alternative solutions.
Gokul.
On 4 March 2015 at 14:05, Nicolas Liochon nkey...@gmail.com wrote:
It's going to be fairly
That's not possible with HBase today. The simplest thing may be to set
your Scan time range to include both today's and yesterday's data and then
filter down to only the data you want inside your map task. Other
possibilities would be creating a custom filter to do the filtering on the
server
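The time-range suggestion above could be sketched like this (a sketch only: it assumes cell timestamps are default write times in milliseconds, and the "yesterday plus today" window is computed on the client; any finer filtering happens in the map task as the message describes):

```java
import java.io.IOException;
import java.util.Calendar;
import org.apache.hadoop.hbase.client.Scan;

public class TwoDayScan {
    // Build a Scan covering yesterday 00:00 (local time) up to now.
    static Scan buildScan(long now) throws IOException {
        Calendar cal = Calendar.getInstance();
        cal.setTimeInMillis(now);
        cal.set(Calendar.HOUR_OF_DAY, 0);
        cal.set(Calendar.MINUTE, 0);
        cal.set(Calendar.SECOND, 0);
        cal.set(Calendar.MILLISECOND, 0);
        long todayStart = cal.getTimeInMillis();
        long yesterdayStart = todayStart - 24L * 3600 * 1000;

        Scan scan = new Scan();
        scan.setTimeRange(yesterdayStart, now + 1); // max timestamp is exclusive
        return scan;
    }
}
```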
If I understand the issue correctly, restarting the master should solve the
problem.
On Wed, Mar 4, 2015 at 5:55 AM, Ted Yu yuzhih...@gmail.com wrote:
Please see HBASE-13067 Fix caching of stubs to allow IP address changes of
restarted remote servers
Cheers
On Tue, Mar 3, 2015 at 8:26 PM,