Hi,
We have done this kind of thing using HBase 0.92.1 + Pig, but we
finally had to limit the size of the tables and move the biggest
data to HDFS: loading data directly from HBase is much slower than
from HDFS, and doing it using M/R overloads HBase region servers,
since several map jobs sc
Thanks, esteban!
I've tried, but it did not work.
I first load the custom hbase-site.xml and then try to check the HBase
server.
So my code is like this:
conf.setInt("hbase.client.retries.number", 1);
conf.setInt("hbase.client.pause", 5);
conf.setInt("ipc.socket.timeout", 5000);
conf.setInt("
http://hbase.apache.org/book.html#hbase_metrics this is what you are
looking for.
Cheers
Ramon
On Wed, Nov 13, 2013 at 1:41 PM, Sandeep L wrote:
> Hi,
> Is there any way to get HBase read and write counts separately? The HBase
> default web interface will show read and write requests combined as
Keeping a separate Hadoop cluster focused only on analysis is a better way to go;
the HBase cluster is then only for collecting data. You can use distcp to copy
data between the two clusters, which is faster, but your Hadoop task has to parse
the HFile format to read the data, which can be done but needs some coding.
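For the distcp step, a minimal sketch (cluster hostnames, NameNode ports, and the table path are assumptions; adjust to your layout):

```shell
# Copy an HBase table's files from the collecting cluster to the analysis
# cluster. distcp copies the raw HFiles, which is why the analysis job then
# has to parse the HFile format itself.
hadoop distcp \
  hdfs://collect-nn:8020/hbase/mytable \
  hdfs://analysis-nn:8020/staging/mytable
```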
Hi,
We do something like that programmatically.
Read blobbed HBase data (qualifiers represent cross-sections such as
country_product and blob data such as clicks, impressions etc.)
We have several aggregation tasks (one per MySQL table) that aggregate the
data and insert it (in batches) into MySQL.
I
Hi
I write the line 'processed' to a LOG file each time I enter a new
unique object into my table.
When I count the number of 'processed' lines in my LOG, I find a number
greater than the number of rows returned by the shell's 'count'
command.
Secondly, it can occur that calling twi
Hi Folks
I added a COMPRESSION value after creating a table, so here is the table
description:
{NAME => 'SPLIT_TEST_BIG', SPLIT_POLICY =>
'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy',
MAX_FILESIZE => '107374182400', FAMILIES => [{NAME => 'f1', COMPRESSION =>
'SNAPPY'}]}
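One thing to keep in mind: changing COMPRESSION only affects HFiles written after the change; existing files are only rewritten with the new codec when they are compacted. A sketch of how that could be applied in the HBase shell (the alter syntax below is for the 0.94 line; verify against your version):

```shell
# 0.94-era shells require the table to be disabled before altering it.
disable 'SPLIT_TEST_BIG'
alter 'SPLIT_TEST_BIG', {NAME => 'f1', COMPRESSION => 'SNAPPY'}
enable 'SPLIT_TEST_BIG'
# Force a major compaction so existing HFiles are rewritten with Snappy.
major_compact 'SPLIT_TEST_BIG'
```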
My
Hi All,
My use case is to find a solution for HBase disaster recovery. Since replication
is an HBase disaster recovery solution, I am using master-slave mode of
replication.
I have two separate HBase clusters and separate zookeeper for each cluster:
Zookeeper for first hbase cluster is : zoo1:2181
Zookeeper
Create another table with the same schema without the compression, insert
the same thing into the two tables, and compare the footprint?
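A sketch of that comparison (table names and the HBase root directory are assumptions):

```shell
# In the HBase shell, create an uncompressed twin with the same family:
#   create 'SPLIT_TEST_BIG_NOCOMP', {NAME => 'f1'}
# After loading the same data into both tables (and flushing), compare the
# on-disk footprint under hbase.rootdir:
hadoop fs -du /hbase/SPLIT_TEST_BIG /hbase/SPLIT_TEST_BIG_NOCOMP
```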
On 2013-11-13 05:58, "Jia Wang" wrote:
> Hi Folks
>
> I added COMPRESSION value after creating a table, so here is the table
> description:
>
> {NAME => 'SPLI
Good day -
I'm a hadoop & hbase newbie, so please excuse me if this is a known issue -
hoping someone might send me a simple fix!
I installed the latest stable tarball : hbase-0.94.13.tar.gz , and followed the
instructions at
docs/book/quickstart.html .
(After installing hadoop-2.2.0, and
How big was the difference between the count in the log and the count from the shell?
Which HBase version are you using ?
Thanks
On Nov 13, 2013, at 2:23 AM, Sznajder ForMailingList
wrote:
> Hi
>
> I write the line 'processed' to a LOG file each time I enter a new
> unique object into my table.
> W
Your hbase.rootdir config parameter points to file: instead of hdfs:
Where is hadoop-2.2.0 running ?
You also need to build the tarball using the hadoop 2 profile. See the following
in pom.xml:
profile for building against Hadoop 2.0.0-alpha. Activate using:
mvn -Dhadoop.profile=2.0
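A sketch of that build step, run from the hbase-0.94.x source tree (exact goals may differ across point releases; -DskipTests is just to speed things up):

```shell
# Build against the Hadoop 2 profile; add assembly:single if you need the
# release tarball (goal names may differ between 0.94 point releases).
mvn clean package -DskipTests -Dhadoop.profile=2.0
```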
--
More of the log and the version of HBase involved please. Thanks.
St.Ack
On Wed, Nov 13, 2013 at 1:07 AM, jingych wrote:
> Thanks, esteban!
>
> I've tried, but it did not work.
>
> I first load the custom hbase-site.xml and then try to check the HBase
> server.
> So my code is like this:
>
Hanish,
I guess you are looking for HBase to automatically switch to the 2nd
cluster. Unfortunately, that is not the architectural design of replication,
and I think many such designs rely on the application layer for such a
'smart' switch. For your application, once 'zoo1:2181' hits a
MasterNotRun
Hi Guys:
I have HBase running on a machine with multiple interfaces (for in-bound and
inter-cluster traffic). I can't get HBase to listen on the interface I want it
to by setting hbase.regionserver.dns.interface. Does this setting work? What is
its purpose?
After reading an old thread on the use
Thank you! I really appreciate your answers, because I learned many new things
about hbase during the last two days. It looks like my jobtracker is not
listening at port 50030, but I noticed there are other ports 60010 and 60030
http://ir.lmcloud.vse.cz:60010/master.jsp
http://ir.lmcloud.vse.cz:60030/re
jingych,
That timeout comes from ZooKeeper; are you running ZK on the same node you
are running the HBase Master on? If your environment requires failing fast
even for ZK connection timeouts, then you need to reduce
zookeeper.recovery.retry.intervalmill and zookeeper.recovery.retry, since
the retries a
Hi all,
I am thinking about using Random Forest to do churn analysis with HBase as
the NoSQL data store.
Currently, all of the user history (basically many types of event data)
resides in S3 & Redshift (we have one table per date/per event).
Events include startTime, endTime, and other pertine
Thanks, Esteban and Stack!
As Esteban said, the problem was solved.
My config is below:
conf.setInt("hbase.client.retries.number", 1);
conf.setInt("zookeeper.session.timeout", 5000);
conf.setInt("zookeeper.recovery.retry", 1);
conf.setInt("zookeeper.recovery.retry.intervalmill", 50);
But it st
Hi,
Is there anything I missed in my CopyTable command?
hbase@node3:~/hbase-0.94.12$ bin/hbase
org.apache.hadoop.hbase.mapreduce.CopyTable --endtime=1384395265000
--peer.adr=hbasetest1:2181:/hbase dns
13/11/13 21:28:37 INFO zookeeper.RecoverableZooKeeper: The identifier of
this process is 31738@
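One thing worth checking, hedged since it depends on the exact 0.94 point release: in older CopyTable versions --endtime only takes effect when --starttime is also given. The general form would be something like this (hosts, table name, and the endtime value are taken from the message above; the --starttime value is an assumption):

```shell
# CopyTable with an explicit time range; --endtime alone may be ignored
# on some 0.94 releases, so pair it with --starttime.
bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --starttime=1384300000000 \
  --endtime=1384395265000 \
  --peer.adr=hbasetest1:2181:/hbase \
  dns
```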
Is it possible to get this from the API instead of hbase_metrics?
Thanks, Sandeep.
> Date: Wed, 13 Nov 2013 17:07:00 +0800
> Subject: Re: Get HBase Read and Write requests per second separately
> From: ra...@appannie.com
> To: user@hbase.apache.org
>
> http://hbase.apache.org/book.html#hbase_metrics this
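If shell access to the metrics is acceptable, one hedged option (assuming the default region server info port 60030 and that your 0.94 build exposes the /jmx servlet) is to scrape the read/write counters directly:

```shell
# readRequestsCount / writeRequestsCount are reported separately in the
# region server metrics; exact metric names may vary between point releases.
curl -s 'http://regionserver-host:60030/jmx' | grep -i requestscount
```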
Dear hbase,
I would like to be added to the hbase group.
jingych,
inline:
On Wed, Nov 13, 2013 at 7:06 PM, jingych wrote:
> Thanks, Esteban and Stack!
>
> As Esteban said, the problem was solved.
>
> My config is below:
>
> conf.setInt("hbase.client.retries.number", 1);
> conf.setInt("zookeeper.session.timeout", 5000);
> conf.setInt("zookeeper.rec