Hi Asaf,
This CDC pattern will be used for directing changes to another system.
Assume I have a table "hbase_alarms" in HBase with columns
"Severity, Source, Time", and I am tracking changes with this CDC tool. Some
external system is putting alarms with their severity and source into the
hbase_alarms table.
This would be a big patch: HBase would need to provide streaming access,
such as an HTableInputStream, HTableOutputStream, etc.
On Tue, Jun 4, 2013 at 12:12 PM, Asaf Mesika wrote:
> I guess one can hack opening a socket from a Coprocessor Endpoint and push
> its scanned data, thus achieving a stream.
>
Better approach would be to break the data in chunks and create a behaviour
similar to indirect blocks.
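For what it's worth, the chunking idea could look roughly like this (a minimal, self-contained sketch; `chunk` is a hypothetical helper, not an HBase API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal sketch of the chunking idea: split one large result into
// fixed-size chunks that a client can fetch one at a time, instead of
// needing a true streaming API. "chunk" is a hypothetical helper.
public class ChunkedResult {
    public static List<byte[]> chunk(byte[] data, int chunkSize) {
        List<byte[]> chunks = new ArrayList<byte[]>();
        for (int off = 0; off < data.length; off += chunkSize) {
            int end = Math.min(off + chunkSize, data.length);
            chunks.add(Arrays.copyOfRange(data, off, end));
        }
        return chunks;
    }

    public static void main(String[] args) {
        // 10 bytes in chunks of 4 -> chunks of 4, 4 and 2 bytes
        for (byte[] c : chunk(new byte[10], 4)) {
            System.out.println(c.length);
        }
    }
}
```

The "indirect block" part would then just be a small index listing the chunk keys, fetched first by the client.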
I guess one can hack opening a socket from a Coprocessor Endpoint and push
its scanned data, thus achieving a stream.
On Sun, Jun 2, 2013 at 12:42 AM, Stack wrote:
> Yeah, no streaming API in our current client (nor does our thrift client
> give you a streaming API).
> St.Ack
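Lacking a streaming API, the socket hack could be sketched like this (plain Java sockets only; a real version would push from a coprocessor endpoint, which is not shown):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

// Sketch of the "push scanned rows over a socket" hack: one side writes
// rows to a socket as it scans them, the other side reads them as a
// stream. Plain sockets only; no coprocessor or HBase classes here.
public class SocketStreamSketch {

    public static List<String> streamRows(final int rows) throws Exception {
        final ServerSocket server = new ServerSocket(0); // any free port

        // Stand-in for the endpoint: pushes each "scanned" row as a line.
        Thread endpoint = new Thread(new Runnable() {
            public void run() {
                try {
                    Socket s = server.accept();
                    PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                    for (int i = 0; i < rows; i++) {
                        out.println("row-" + i);
                    }
                    s.close();
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        });
        endpoint.start();

        // Client side: consume the rows as a stream.
        List<String> received = new ArrayList<String>();
        Socket s = new Socket("localhost", server.getLocalPort());
        BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            received.add(line);
        }
        s.close();
        endpoint.join();
        server.close();
        return received;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(streamRows(3)); // [row-0, row-1, row-2]
    }
}
```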
What's wrong with HBase's native master-slave replication, or am I missing
something here?
Need to update this badly. But in a quick pinch, this will load balance one
table.
https://github.com/phobos182/hadoop-hbase-tools/blob/master/hbase/hbase_table_balancer.rb
Thanks, Ted. I am using 0.94.7 now.
bq. I expect load balancer work at table level
In 0.94, load balancer performs per-table load balancing.
Thanks JM, I will read the balancer code to learn more.
Take a look into the balancer code.
It's called for each table. So if it's not a table you want to
balance, just return an empty plan...
JM
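As a rough sketch of that suggestion (simplified stand-in types; a real implementation would extend HBase's default load balancer class and keep its plan-building logic):

```java
import java.util.Collections;
import java.util.List;

// Sketch of JM's suggestion: the balancer is invoked once per table, so a
// custom one can return an empty plan for every table except the target.
// Types are simplified stand-ins, not the real HBase balancer interfaces.
public class SingleTableBalancer {
    private final String targetTable;

    public SingleTableBalancer(String targetTable) {
        this.targetTable = targetTable;
    }

    // Returns the region moves to make; an empty list means "no-op plan".
    public List<String> balance(String table, List<String> proposedMoves) {
        if (!targetTable.equals(table)) {
            return Collections.emptyList(); // leave every other table alone
        }
        return proposedMoves; // fall through to the normal balancing result
    }

    public static void main(String[] args) {
        SingleTableBalancer b = new SingleTableBalancer("my_table");
        System.out.println(b.balance("other_table", java.util.Arrays.asList("move-r1")));
        System.out.println(b.balance("my_table", java.util.Arrays.asList("move-r1")));
    }
}
```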
Hi JM,
Thanks. Actually I expect the load balancer to work at table level, not
region level. Is that possible here?
Hi Azuryy,
I don't think this is possible, but you can duplicate the default HBase
balancer and modify it so it only balances one table.
But why do you want to balance only one?
JM
Hi,
Can anybody tell me how to configure the load balancer to balance just one
table?
Thanks.
Thanks Amit.
In my environment, I run dozens of clients, each scanning about 5-20K of data
concurrently, and the average read latency for cached data is around 5-20 ms.
So it seems there must be something wrong with my cluster environment or
application. Or did you run that with multiple clients?
If you have simply downloaded and extracted HBase the same way I did
below, then it's not coming from HBase, since it's working for everyone
else.
There might be something wrong with your installation/configuration.
Do you have any HADOOP_CLASSPATH or HBASE_CLASSPATH declared? If so,
can you remove it?
Hello,
I am trying to run a Hadoop job locally, through an Eclipse run configuration,
against data which is on the cluster.
I am running into the following error:
[2013-06-03 10:21:49,031] [WARN] [main] org.apache.hadoop.hbase.client.HTable -
This constructor HTable(byte[]) is deprecated and it will be
It's a packaged version, but I have no problems running HIVE and Hadoop
locally using it.
--
View this message in context:
http://apache-hbase.679495.n3.nabble.com/Error-Could-not-find-or-load-main-class-org-apache-hadoop-hbase-util-GetJavaProperty-tp4045637p4045646.html
Sent from the HBase User mailing list archive at Nabble.com.
No, it's not sshd related.
Is it a packaged version of the JDK? Or a tar.gz version? If it's a
packaged version, can you try with the tar.gz version? I had issues in
the past with the packaged one...
JM
I'm running Oracle JDK:

psamtani@ubuntu:~/dev/hadoop/hbase-0.94.8$ echo $JAVA_HOME
/usr/lib/jvm/java-7-oracle

I thought it might have been that I didn't have sshd running, but even after
installing it the problem persists:

psamtani@ubuntu:~/dev/hadoop/hbase-0.94.8$ ps -ef | grep ssh
psamtani
There is something wrong with your configuration.
Here is what I get on my side:
#:~/foo$ wget
http://apache.mirror.rafal.ca/hbase/hbase-0.94.8/hbase-0.94.8.tar.gz
--2013-06-03 13:41:30--
http://apache.mirror.rafal.ca/hbase/hbase-0.94.8/hbase-0.94.8.tar.gz
Resolving apache.mirror.rafal.ca (apach
Still the same, thanks for your help though
Can you try to start it right after the extraction? Don't even modify anything.
Extract, export JAVA_HOME, run.
If this is working, we will take a look at why your configuration below
is not working.
You can also update the JAVA_HOME in hbase-env.sh.
JM
Hi Jean-Marc,
I downloaded the tarball, extracted it, and modified conf/hbase-site.xml as
follows:

<property>
  <name>hbase.rootdir</name>
  <value>/home/psamtani/dev/hadoop/hbase-data/hbase</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/home/psamtani/dev/hadoop/hbase-data/zookeeper</value>
</property>

And then tried to start
Hi,
HBase should work with JDK 7; however, so far it only officially
supports JDK 6. The issue you are facing below is most probably not
JDK related. Can you describe the steps you followed?
JM
I downloaded the latest stable hbase 0.94.8 and followed the instructions for
getting started.
However, when I try to run HBase I get the following:

ubuntu:~/dev/hadoop/hbase-0.94.8$ ./bin/start-hbase.sh
Error: Could not find or load main class org.apache.hadoop.hbase.util.GetJavaProperty
Error:
Hi Yong,
Is it possible to share the paper?
regards.
yavuz
Hello,
I have presented 5 CDC approaches based on HBase and published my results
at ADBIS 2013.
regards!
Yong
It depends on many environment-related variables, and on the data as well.
But to give you a number after all: one of our clusters is on EC2, 6 RS on
m1.xlarge machines (network performance 'high' according to AWS); 90% of the
time we do reads; our avg data size is 2K, block cache at 20K, 100 rows
HBase doesn't know that all the data is in the block cache. It first has to
look at the HTable to get the block id (table name + offset), and then find
that block in the block cache.
So if all the data is in the block cache, you just avoid reading the data
from the HFile directly, saving some I/O time. But it depends on your data
size.
if
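That two-step lookup can be modeled in a few lines (a toy model only; in real HBase the cache key is derived from the HFile and block offset, and block sizes vary):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the read path above: build the block key first, then check
// the block cache; only on a miss do we pay the (simulated) HFile read.
public class BlockCacheModel {
    private final Map<String, byte[]> cache = new HashMap<String, byte[]>();
    private int diskReads = 0; // counts simulated HFile reads

    public byte[] read(String hfile, long offset) {
        String key = hfile + "_" + offset;  // step 1: compute the block key
        byte[] block = cache.get(key);      // step 2: block cache lookup
        if (block == null) {                // miss: go to the HFile
            diskReads++;
            block = new byte[64 * 1024];    // pretend this came from disk
            cache.put(key, block);
        }
        return block;
    }

    public int getDiskReads() {
        return diskReads;
    }

    public static void main(String[] args) {
        BlockCacheModel m = new BlockCacheModel();
        m.read("hfile-1", 0);  // miss: one disk read
        m.read("hfile-1", 0);  // hit: served from cache
        System.out.println(m.getDiskReads()); // prints 1
    }
}
```

Even with every block cached, the lookup itself plus RPC overhead is what a fully cached scan still pays for.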
Hi all,
Currently we are working on an HBase change data capture (CDC) tool. I want
to share our ideas and continue development according to your feedback.
As you know CDC tools are used for tracking the data changes and take
actions according to these changes[1]. For example in relational
databa
What is it that you are observing now?
Regards
Ram
Hi,
If all the data is already in the RS block cache, then what's the typical
scan latency for scanning a few rows from a, say, several-GB table (with
dozens of regions) on a small cluster with, say, 4 RS?
A few ms? Tens of ms? Or more?
Best Regards,
Raymond Liu