Subject: What does Apache HBase do?
Good day from Singapore,
I notice that my company/organization uses Apache HBase. What does it do?
Just being curious.
Regards,
Mr. Turritopsis Dohrnii Teo En Ming
Targeted Individual in Singapore
18 May 2022 Wed
biggest timestamp without a full scan, or
at least scan only the memstore; the last row written must still be in the memstore.
But there is no such API a client can invoke.
Let me think about it again.
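For what it's worth, the "scan only the memstore" idea can be sketched with plain Java collections (a TreeMap standing in for the memstore's sorted snapshot; all names here are made up and not real HBase client API):

```java
import java.util.Map;
import java.util.TreeMap;

public class LatestTimestampSketch {
    // Largest cell timestamp in a small in-memory snapshot -- standing in for
    // scanning just the memstore instead of the whole table. Only valid while
    // the most recent write has not yet been flushed to an HFile.
    static long maxTimestamp(Map<String, Long> memstore) {
        long max = Long.MIN_VALUE;
        for (long ts : memstore.values()) {
            if (ts > max) max = ts;
        }
        return max;
    }

    public static void main(String[] args) {
        Map<String, Long> memstore = new TreeMap<>(); // memstore keeps rows sorted
        memstore.put("row-a", 1_652_000_000_000L);
        memstore.put("row-b", 1_652_000_123_000L);
        System.out.println(maxTimestamp(memstore)); // prints the later timestamp
    }
}
```

Note the assumption baked in: once the memstore flushes, the latest timestamp lives in an HFile and this shortcut no longer answers the question.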
thanks,
Ming
-Original Message-
From: Josh Elser
Sent: Thursday, July 12, 2018 12:55 AM
To: user@hbase.apache.org
Subject
purpose is to quickly check the last modified timestamp for a given HBase
table.
Thanks,
Ming
used
distro, I can change my code to use the new HBase API, which is much more elegant.
Thanks,
Ming
-Original Message-
From: Yu Li
Sent: Saturday, April 21, 2018 6:15 PM
To: Hbase-User
Subject: Re: How to get the HDFS path for a given HBase table?
Maybe HBASE-19858 could help, more
using HBase 1.2.0, so I want to directly use the HDFS API to set the storage
policy for a given HBase table, but I have to know its path.
Ming
-Original Message-
From: Sean Busbey
Sent: Friday, April 20, 2018 8:49 PM
To: user@hbase.apache.org
Subject: Re: How to get the HDFS path for a
a programmatic way to get that Path string.
Any help will be very appreciated!
Thanks,
Ming
Thank you Anoop for the answer, this is very helpful.
Ming
-Original Message-
From: Anoop John
Sent: Wednesday, April 18, 2018 12:50 AM
To: user@hbase.apache.org
Subject: Re: can we set a table to use a HDFS specific HSM Storage policy?
Oh ya seems yes.. I was under the impression
Hi, Anoop,
In which release is this API supported? From the JIRA
https://issues.apache.org/jira/browse/HBASE-14061, it seems this is only
available in HBase 2.0?
Thanks,
Ming
-Original Message-
From: Anoop John
Sent: Tuesday, April 17, 2018 1:42 PM
To: user@hbase.apache.org
different policy. Pls see
setStoragePolicy(String) API in HColumnDescriptor.
-Anoop-
On Tue, Apr 17, 2018 at 7:16 AM, Ming wrote:
> Hi, all,
>
>
>
> HDFS supports HSM: one can set a file or dir storage policy to use different
> hardware disks. I am wondering whether there is a way in HB
manual, but did not find related topics, so I asked for help here.
Thanks,
Ming
% for them. But reading 1 billion
rows is very slow. Is this true?
So is there any other better way to randomly get 1% rows from a given table?
Any idea will be very appreciated.
We don't know the distribution of the 1 billion rows in advance.
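One idea, in case it helps: if I remember correctly, HBase ships a server-side RandomRowFilter that keeps each row with a fixed probability, so the client never receives the other ~99% of rows (the scan itself still touches every row server-side). The per-row decision is just a Bernoulli trial, which a stdlib-only sketch can show:

```java
import java.util.Random;

public class SampleSketch {
    // Keep a row with probability p -- the same per-row Bernoulli trial a
    // server-side random filter would apply during a scan, so no knowledge
    // of the row distribution is needed.
    static boolean keep(Random rnd, double p) {
        return rnd.nextDouble() < p;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42); // seeded only to make the demo repeatable
        int kept = 0, total = 1_000_000;
        for (int i = 0; i < total; i++) {
            if (keep(rnd, 0.01)) kept++;
        }
        System.out.println(kept + " of " + total + " rows kept (~1%)");
    }
}
```

This gives roughly, not exactly, 1% of rows; if you need exactly 1%, you would have to over-sample slightly and trim on the client.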
Thanks,
Ming
Thank you Anoop,
Is there any rule that we can use to calculate the disk space usage for a given
table to do minor compaction?
Thanks,
Ming
-Original Message-
From: Anoop John [mailto:anoop.hb...@gmail.com]
Sent: Monday, August 28, 2017 2:22 PM
To: user@hbase.apache.org
Subject: Re
on, isn't it? So we don't understand why it is using
so much extra disk space. Is anything wrong in our system?
thanks,
Ming
-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Saturday, August 26, 2017 9:54 PM
To: user@hbase.apache.org
Subject: Re: [Help] minor compact
minor
compaction is still triggered.
The system is CDH 5.7, HBase is 1.2.
Could anyone help to give us some suggestions? We are really stuck. Thanks in
advance.
Thanks,
Ming
-Original Message-
From: Andrzej [mailto:borucki_andr...@wp.pl]
Sent: Friday, August 25, 2017 11:55 PM
To
The cluster enabled shortCircuitLocalReads.
dfs.client.read.shortcircuit
true
When replication was enabled, we found a large number of error logs:
1. shortCircuitLocalReads (fails every time).
2. Trying to read via the datanode on targetAddr (succeeds).
How to make shortCircuitLocalReads successfully w
find example
- prepare() and process().
On Tue, Aug 9, 2016 at 5:04 PM, Liu, Ming (Ming) wrote:
> Thanks Ted for pointing this out. Can this TableLockManager be used
> from a client? I am fine with migrating if this API changes in each release.
> I am writing a client application, and need t
Thanks Ted for pointing this out. Can this TableLockManager be used from a
client? I am fine with migrating if this API changes in each release.
I am writing a client application, and need to lock a hbase table, if this can
be used directly, that will be super great!
Thanks,
Ming
-Original
n give the tablename
and invoke getLock(); it checks the row 0 value in an atomic check-and-put
operation. So if the 'table lock' is free, anyone should be able to get it, I
think.
Maybe I have to study the Zookeeper's distributed lock recipes?
Thanks,
Ming
-Original
period must wait.
I am very new to lock implementation, so I fear there are basic problems in
this 'design'.
So please help me review whether there are any big issues with this idea. Any
help will be very appreciated.
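To make the review easier, here is how I picture the idea in plain Java, with ConcurrentHashMap.putIfAbsent standing in for the atomic check-and-put on row 0 (all names made up; it also deliberately ignores the hard part, a crashed owner never releasing the lock row, which is exactly what ZooKeeper's lock recipe with ephemeral nodes solves):

```java
import java.util.concurrent.ConcurrentHashMap;

public class TableLockSketch {
    // One "lock row" per table name; putIfAbsent models HBase's atomic
    // check-and-put on row 0: succeed only if the row is currently absent.
    static final ConcurrentHashMap<String, String> lockRows = new ConcurrentHashMap<>();

    static boolean tryLock(String table, String owner) {
        return lockRows.putIfAbsent(table, owner) == null;
    }

    static void unlock(String table, String owner) {
        lockRows.remove(table, owner); // removes only if this owner still holds it
    }

    public static void main(String[] args) {
        System.out.println(tryLock("t1", "client-a")); // true: lock was free
        System.out.println(tryLock("t1", "client-b")); // false: already held
        unlock("t1", "client-a");
        System.out.println(tryLock("t1", "client-b")); // true: free again
    }
}
```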
Thanks a lot,
Ming
When replication was enabled, we found a large number of error logs. Is the
cluster configuration incorrect?
2016-08-03 10:46:21,721 DEBUG
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Opening
log for replication dn7%2C60020%2C1470136216957.1470192327030 at 16999670
2016-08-03 10:4
. The major goal of encryption for me is that when the data
is physically lost, one cannot read it without the key. So unless the
NFS and the data disk are lost to the same person, it is safe. But I should
really start to read about HSM.
Very appreciated of your help.
Ming
-Original Message-
From: Andrew Pu
an acceptable plan?
Thanks,
Ming
-Original Message-
From: Andrew Purtell [mailto:apurt...@apache.org]
Sent: June 3, 2016 12:27
To: user@hbase.apache.org
Cc: Zhang, Yi (Eason)
Subject: Re: Re: hbase 'transparent encryption' feature is production ready or not?
> We are now confident to use this
Thank you Andrew!
What we hear must be rumor :-) We are now confident to use this feature.
HSM is a good option, I am new to it. But will look at it.
Thanks,
Ming
-Original Message-
From: Andrew Purtell [mailto:apurt...@apache.org]
Sent: June 3, 2016 8:59
To: user@hbase.apache.org
Cc: Zhang, Yi
t practice about how to manage the key?
Thanks,
Ming
Thanks Frank, this is something I am looking for. Would like to have a try with
it.
Thanks,
Ming
-Original Message-
From: Frank Luo [mailto:j...@merkleinc.com]
Sent: April 5, 2016 1:38
To: user@hbase.apache.org
Cc: Sumit Nigam
Subject: RE: Major compaction
I wrote a small program to do MC in a "
should be reasonable.
For now, 14 mins to load 135G of raw data is not bad for me, about 600G/hr on a
10-node cluster. Not very good, but acceptable, and I am counting on the
scalability of HBase and MapReduce :-)
Thanks Ted for sharing the info.
Ming
-Original Message-
From: Ted Yu [mailto:yuzhih...@gm
ng 100G or even more data into HBase, and how he/she did it, and what
is the average loading speed. As a developer, I don't have any real project
experience, just do my experiment in our lab. It looks too slow for me, but
maybe that is a normal loading speed... So I want to hear from exp
e has some better ideas to do bulkload in
HBase? Or is importtsv already the best tool for bulkload in the HBase world?
If I have real big data (say > 50T), this does not seem to be a practical
loading speed, does it? Or is it? In practice, how do people load data into HBase normally?
Thanks in advance,
Ming
ta code to further understand, and I will feed back
here if I have some more findings or any other good method to share data
in a simple way.
Thanks,
Ming
-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Thursday, April 16, 2015 5:16 AM
To: user@hbase.apache.org
Subject:
ot to depend on ZooKeeper too much, no idea why. Is there any other good
way I can use to share data among different coprocessors?
Thanks,
Ming
-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Wednesday, April 15, 2015 8:25 PM
To: user@hbase.apache.org
Subject:
I only have one region for that table during the run. I confirmed via the hbase
shell status 'detailed' command.
There are not many examples I can find about how to use getSharedData(); could
someone help me here? What is missing in my simple code? Thanks very much in
advance!
Thanks,
Ming
Thanks, Enis,
Your reply is very clear, I finally understand it now.
Best Regards,
Ming
-Original Message-
From: Enis Söztutar [mailto:enis@gmail.com]
Sent: Thursday, February 19, 2015 10:41 AM
To: hbase-user
Subject: Re: HTable or HConnectionManager, how a client connect to HBase
s a good abstraction to control the life
cycle of a connection. I seem to understand now :-)
Thanks,
Ming
-----Original Message-
From: Liu, Ming (HPIT-GADSC)
Sent: Saturday, February 14, 2015 10:45 PM
To: user@hbase.apache.org
Subject: HTable or HConnectionManager, how a client connect to
?
Also, as David Chen asked, if all threads share the same HConnection, it may have
limitations in supporting high throughput, so a pool of connections may be better?
Thanks,
Ming
-Original Message-
From: Serega Sheypak [mailto:serega.shey...@gmail.com]
Sent: Wednesday, February 04, 2015 1:02 AM
To
correct metrics, it means to me that
two HTables do not share an HConnection even when using the same 'configuration'
in the constructor. So it confuses me more and more.
Please someone kindly help me for this newbie question and thanks in advance.
Thanks,
Ming
Thanks Ted!
This is exactly what I need.
This will be a memory copy, but it solves my problem. I hope HBase can provide a
setTimestamp() method in a future release.
Best Regards,
Ming
-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Wednesday, January 21, 2015 11:30 AM
t. Is there any reason it is not provided? I
hope a future release of Mutation can expose a setTimestamp() method; is that
possible? If so, my job will get much easier...
Thanks,
Ming
? Or in other words, is the data in an in-memory
CF as safe as in an ordinary CF? No difference?
I could do the test myself, but it needs some time, so I would like to be lazy and
ask for help here :) If someone happens to know the answer, thanks in advance!
Thanks,
Ming
Thank you both!
Yes, I can see there is the '.out' file with clear proof that the process was
'killed'. So we can prove this issue now!
And it is also true that we must rely on the JVM itself for proof that the kill
operation was due to OOM.
Thank you both, this is a very good lea
people can make sure
there is an OOM issue in the HBase region server?
Thank you,
Ming
he.org/book.html in section 9.7.7.7.1.1, but I am not sure.
I'd be happy to know how you solve the issue later. And as Ram and Qiang Tian
mentioned, you can only 'alleviate' the issue by increasing the knob, but if you
give hbase too much pressure, it will not work well sooner or later. Ever
Thank you Bharath,
This is a very helpful reply! I will share the connection between two threads.
Simply put, HTable is not safe for multi-threaded use; is this true? With
multiple threads, one must use HConnectionManager.
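So the pattern, as I understand it, in a stdlib-only sketch (Connection and Table here are made-up stand-ins for HConnection and HTable, not the real classes):

```java
public class ConnectionSharingSketch {
    // Stand-in for HConnection: heavyweight, thread-safe, one per process.
    static class Connection {
        Table getTable(String name) {
            return new Table(name); // hand out a fresh, cheap handle
        }
    }

    // Stand-in for HTable: lightweight but NOT thread-safe, so every thread
    // asks the shared Connection for its own instance instead of sharing one.
    static class Table {
        final String name;
        Table(String name) { this.name = name; }
    }

    public static void main(String[] args) throws InterruptedException {
        Connection shared = new Connection(); // the one shared object
        Runnable work = () -> {
            Table mine = shared.getTable("demo"); // per-thread handle
            // ... gets/puts against `mine` would go here ...
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("both threads used their own Table handle");
    }
}
```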
Thanks,
Ming
-Original Message-
From: Bharath Vissapragada [mailto:bhara
Thread t2=new ClientThread1();
t1.start();
t2.start();
}
}
This should be a very basic question, sorry; I really did some searching but
cannot find any good explanation. Any help will be very appreciated.
Thanks,
Ming
later. But I hope the data at
least makes sense even on a shared env.
The heap configuration is something I really need to check , thank you.
Best Regards,
Ming
-Original Message-
From: Nick Dimiduk [mailto:ndimi...@gmail.com]
Sent: Saturday, November 22, 2014 5:57 AM
To: user@hbase.apache
: 12288K
NUMA node0 CPU(s): 0,2,4,6,8,10
NUMA node1 CPU(s): 1,3,5,7,9,11
Thanks,
Ming
-Original Message-
From: lars hofhansl [mailto:la...@apache.org]
Sent: Friday, November 21, 2014 4:31 AM
To: user@hbase.apache.org
Subject: Re: how to explain read/write performance change after
Thank you Ted,
It is a great explanation. You are always very helpful ^_^
I will study the link carefully.
Thanks,
Ming
-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Friday, November 21, 2014 1:32 AM
To: user@hbase.apache.org
Subject: Re: how to explain read/write
operation, or is it possible that the write to the WAL is still buffered somewhere
when hbase puts the data into the memstore?
Reading the src code may cost me months, so a kind reply will help me a lot...
Thanks very much!
Best Regards,
Ming
Thank you Andrew, this is an excellent answer, I get it now. I will try your
hbase client for a 'fair' test :-)
Best Regards,
Ming
-Original Message-
From: Andrew Purtell [mailto:apurt...@apache.org]
Sent: Thursday, November 13, 2014 2:08 AM
To: user@hbase.apache.org
Cc: D
kload=com.yahoo.ycsb.workloads.CoreWorkload
readallfields=true
readproportion=0
updateproportion=1
scanproportion=0
insertproportion=0
requestdistribution=zipfian
Thanks,
Ming
would find your table under the following directory:
$rootdir/{namespace}/table
If you don't specify a namespace at table creation time, the 'default' namespace
would be used.
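That rule is simple enough to sketch as a pure function (a hypothetical helper, not real HBase code; the real client resolves the path via its own utilities):

```java
public class TablePathSketch {
    // The rule quoted above: $rootdir/{namespace}/table, with 'default'
    // substituted when no namespace was given at table creation time.
    static String tableDir(String rootDir, String namespace, String table) {
        String ns = (namespace == null || namespace.isEmpty()) ? "default" : namespace;
        return rootDir + "/" + ns + "/" + table;
    }

    public static void main(String[] args) {
        System.out.println(tableDir("/hbase", null, "mytable"));  // /hbase/default/mytable
        System.out.println(tableDir("/hbase", "prod", "orders")); // /hbase/prod/orders
    }
}
```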
Cheers
On Sun, Nov 2, 2014 at 7:16 PM, Liu, Ming (HPIT-GADSC)
wrote:
> Hi, all,
>
> I have a program to
the web site seems not to be updated. So can
anyone help briefly introduce the new directory structure or give me a link?
It would be good to know what each directory is for.
Thanks,
Ming
and 0.94.24 as well.
Thank you all for the help.
Ming
-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Thursday, October 16, 2014 10:29 PM
To: user@hbase.apache.org
Subject: Re: when will hbase create the zookeeper znode 'root-region-server'?
HBase 0.94
help me if you have any idea. Thanks very much in advance.
Thanks,
Ming
split in the middle of the key range? Or does there exist another
algorithm here? Any help will be very appreciated!
Best Regards,
Ming
-Original Message-
From: john guthrie [mailto:graf...@gmail.com]
Sent: Wednesday, August 06, 2014 6:35 PM
To: user@hbase.apache.org
Subject: Re: Why hbase need manual split
ee a problem with auto split.
Is this true? Can HBase do splits in other ways?
Thanks,
Ming
-Original Message-
From: john guthrie [mailto:graf...@gmail.com]
Sent: Wednesday, August 06, 2014 6:01 PM
To: user@hbase.apache.org
Subject: Re: Why hbase need manual split?
i had a customer with
.
Regards,
Ming
.
Ming
-Original Message-
From: Dhaval Makawana [mailto:dhaval.makaw...@gmail.com]
Sent: Sunday, August 28, 2011 2:06 AM
To: user@hbase.apache.org
Subject: Number of map jobs per region
Hi,
We have 31 regions for a table in our HBase system and hence while scanning
the table via
It looks like there is an HBase API called checkAndPut. By setting the expected
value to null, you can achieve "put only when the row + column family + column
qualifier doesn't exist". Nice feature.
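In case it helps future readers, the semantics can be modeled with a plain map (a made-up method, not the real client API; HBase performs the whole compare-and-write atomically server-side, which the synchronized block stands in for here):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class CheckAndPutSketch {
    // Model of checkAndPut on one column: apply the put only if the current
    // value equals `expected`; expected == null means "only if the cell does
    // not exist yet".
    static boolean checkAndPut(Map<String, String> column, String qualifier,
                               String expected, String newValue) {
        synchronized (column) {
            if (!Objects.equals(column.get(qualifier), expected)) return false;
            column.put(qualifier, newValue);
            return true;
        }
    }

    public static void main(String[] args) {
        Map<String, String> cf = new HashMap<>();
        System.out.println(checkAndPut(cf, "q", null, "v1")); // true: cell absent
        System.out.println(checkAndPut(cf, "q", null, "v2")); // false: now present
        System.out.println(checkAndPut(cf, "q", "v1", "v2")); // true: value matched
    }
}
```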
_____
From: Ma, Ming
Sent: Wednesday,
lock()
// let us do checking again in case another instance has just inserted the same row
If (!Row.Get())
{
    // the row doesn't exist
    Row.Put();
}
Zookeeper.unlock()
}
Any suggestions?
Thanks.
Ming
Hi,
Where can I find the targeted release date of 0.92.0?
Thanks.
Ming