bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter
[ ...]
*Thanks & Regards*
∞
Shashwat Shriparv
On Wed, May 29, 2013 at 10:29 AM, rajeshbabu chintaguntla <
rajeshbabu.chintagun...@huawei.com> wrote:
> If you enable aggregation coprocessors you can use aggregation client which
> is m
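For reference, the RowCounter job at the top takes the table name as its argument. A typical invocation might look like the following (the table name is a placeholder, and this assumes a running cluster with MapReduce configured):

```shell
# Runs a MapReduce job that counts rows on the cluster; usually much
# faster than the shell's `count` for large tables.
bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter usertable
```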
Hello,
I'm setting up a jenkins job for hbase, building branch 0.95 (from github)
under jdk 6.
However, sometimes the build passes and sometimes it does not. Several tests
are prone to failing, such as:
org.apache.hadoop.hbase.ipc.TestDelayedRpc.testDelayedRpcImmediateReturnValue
org.apache.hadoop.hbase.i
If you enable aggregation coprocessors you can use the aggregation client, which is
much faster than a normal count from the shell.
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/coprocessor/AggregationClient.html#rowCount(byte[],
org.apache.hadoop.hbase.coprocessor.ColumnInterpreter,
org.a
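A minimal sketch of calling that rowCount method from a Java client, against the 0.94-era API the link above describes. This assumes a running cluster with the AggregateImplementation coprocessor enabled on the table; the table and family names are placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

public class RowCountExample {
  public static void main(String[] args) throws Throwable {
    Configuration conf = HBaseConfiguration.create();
    AggregationClient aggregationClient = new AggregationClient(conf);
    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("family"));   // placeholder column family
    // rowCount executes on the region servers via the coprocessor,
    // so no row data is shipped back to the client.
    long rowCount = aggregationClient.rowCount(
        Bytes.toBytes("usertable"),            // placeholder table name
        new LongColumnInterpreter(), scan);
    System.out.println("row count: " + rowCount);
  }
}
```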
Stack & Enis:
Your responses clear my thoughts. Thanks.
On Wed, May 29, 2013 at 6:19 AM, Enis Söztutar wrote:
> Hi,
>
> HDFS has two interfaces for durability: hflush and hsync:
>
> Hflush() : Flush the data packet down the datanode pipeline. Wait for
> ack’s.
> Hsync() : Flush the data packet
Another option is Phoenix (https://github.com/forcedotcom/phoenix),
where you'd do
SELECT count(*) FROM my_table
Regards,
James
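The Phoenix query above could be issued through its JDBC driver, roughly like this (a sketch assuming the Phoenix jar is on the classpath, a ZooKeeper quorum on localhost, and a placeholder table name):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class PhoenixCount {
  public static void main(String[] args) throws Exception {
    // The Phoenix JDBC URL names the ZooKeeper quorum of the cluster.
    Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
    ResultSet rs = conn.createStatement()
        .executeQuery("SELECT count(*) FROM my_table");
    if (rs.next()) {
      System.out.println("row count: " + rs.getLong(1));
    }
    conn.close();
  }
}
```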
On 05/28/2013 03:25 PM, Ted Yu wrote:
Take a look at http://hbase.apache.org/book.html#rowcounter
Cheers
On Tue, May 28, 2013 at 3:23 PM, Shahab Yunus wrote:
> Is there a faster way to get the count of rows in an HBase table
> (potentially a huge one)? I am looking for ways other than the 'count'
> shell command or any Pig script? Th
Is there a faster way to get the count of rows in an HBase table
(potentially a huge one)? I am looking for ways other than the 'count'
shell command or any Pig script? Thanks.
Regards,
Shahab
Hi,
HDFS has two interfaces for durability: hflush and hsync:
Hflush() : Flush the data packet down the datanode pipeline. Wait for
ack’s.
Hsync() : Flush the data packet down the pipeline. Have datanodes execute
FSYNC equivalent. Wait for ack’s.
There is some work on adding a Durability API in
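Both calls are exposed on FSDataOutputStream. A sketch of the difference in client code (assuming a Hadoop 2-style client where hflush/hsync are public, an HDFS cluster reachable via the default fs.defaultFS, and a placeholder path):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DurabilityExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FSDataOutputStream out = fs.create(new Path("/tmp/durability-demo"));
    out.write("some edits".getBytes("UTF-8"));
    // hflush: pushes the packet down the datanode pipeline and waits for
    // acks; data may still sit in OS buffers on each datanode.
    out.hflush();
    out.write("more edits".getBytes("UTF-8"));
    // hsync: like hflush, but each datanode also does an fsync-equivalent
    // before acking, so the data survives a whole-node power loss.
    out.hsync();
    out.close();
  }
}
```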
Here's my config. This is for a different file system so it might provide
some insight compared with the examples online (
http://hbase.apache.org/book/example_config.html)
You need the fully qualified name for the hbase.rootdir.
hbase.master
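As an illustration, a fully qualified hbase.rootdir in hbase-site.xml might look like this (the scheme, namenode host, port, and path below are placeholders for your own cluster):

```xml
<property>
  <name>hbase.rootdir</name>
  <!-- Fully qualified: scheme, namenode host, and port; not a bare path -->
  <value>hdfs://namenode.example.com:8020/hbase</value>
</property>
```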
Or, if someone has a sample hbase-site.xml file
that I can see, that would be even better :).
On Tue, May 28, 2013 at 6:05 PM, Yves S. Garret
wrote:
> Ok, I finally got HBase to work. I kept the current networking configs
> and then just tried to run it with the default configs. I
Ok, I finally got HBase to work. I kept the current networking configs
and then just tried to run it with the default configs. I didn't change the
path to the directory, which I think is what I was messing up in hbase-site.xml.
One more question: what should the syntax look like in this thing? If I
want t
Hi, folks,
I am wondering how to get an overview of the replication settings for all
tables of a cluster.
For example, I can get individual table/columnfamily info by describing the
table
hbase(main):003:0> describe 'usertable'
DESCRIPTION                                          ENABLED
 {NAME => 'usertable', FAMILIES => [{NAME => 'family true
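One way to get a cluster-wide overview is through the Java client API rather than describing tables one by one. A sketch against the 0.94-era API, assuming a running cluster (scope 1 means the family is replicated, 0 means it is not):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class ReplicationOverview {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    // Walk every table and print the replication scope of each family.
    for (HTableDescriptor table : admin.listTables()) {
      for (HColumnDescriptor family : table.getColumnFamilies()) {
        System.out.println(table.getNameAsString() + "/"
            + family.getNameAsString()
            + " REPLICATION_SCOPE=" + family.getScope());
      }
    }
    admin.close();
  }
}
```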
Ok, it's obvious that this is a networking issue. I'm running on CentOS
and the hostname file is not in /etc, it's located in /etc/sysconfig/network
instead.
This is what that file looks like at the moment:
NETWORKING=yes
HOSTNAME=ysg.connect
/etc/hosts is like this:
127.0.0.1 localhost ysg.c
Just curious, but what's zookeeper.sh in the bin directory of HBase?
On Fri, May 24, 2013 at 10:27 PM, Jay Vyas wrote:
> Yes, that's a great post; it helped me appreciate the complexity of the
> whole thing too. There's gotta be a JIRA in here somewhere :)
>
> Sent from my iPhone
>
> On May 24, 20
On Tue, May 28, 2013 at 7:09 AM, jingguo yao wrote:
> Section 2.1.3 says that Hadoop 1.0.4 works with HBase-0.94.x [1]. And
> Section 2.1.3.3 says that 1.0.4 has a working durable sync. But when I
> check the source code of DFSClient.DFSOutputStream's sync method, I
> find the following javadoc:
Over in HBASE-8626, Jean-Marc and Andrew voiced the opinion that the
current behavior may not be a bug.
Vinod:
What do you think?
On Sat, May 25, 2013 at 4:53 PM, Vinod V wrote:
> I have a HBase table with a single column family and columns are added to
> it over time. These columns are named
Hi Tian-Ying,
I don't think there is already a JIRA for that. The idea was to open a new
one and ask for HBCK to be able to fix that. Can you do that?
Do you have an easy way to reproduce your issue? Like by manually creating
files in HDFS or something like that?
JM
2013/5/22 Tianying Chang
>
Section 2.1.3 says that Hadoop 1.0.4 works with HBase-0.94.x [1]. And
Section 2.1.3.3 says that 1.0.4 has a working durable sync. But when I
check the source code of DFSClient.DFSOutputStream's sync method, I
find the following javadoc:
/**
* All data is written out to datanodes. It is n