The exception was thrown by:
SecureBulkLoadEndpoint
-- cleanupBulkLoad,
However, I checked the /tmp/hbase-staging dir, and all the dirs were empty!
I also wonder why we create the staging dir when deleting it:
public void cleanupBulkLoad(RpcController controller,
Cleanup
Without cleaning the staging directory, temp files would pile up on your HDFS,
right?
Cheers
On Mon, Jun 9, 2014 at 8:25 PM, jhaobull wrote:
> thanks!
>
>
> For this method only does some cleanup work, and I found that the data has
> been imported to hbase successfully, and now I just ignore it,
thanks!
Since this method only does some cleanup work, and I found that the data has been
imported to HBase successfully, I just ignore it for now; is that OK?
Original message
From: Ted Yu yuzhih...@gmail.com
To: user@hbase.apache.org u...@hbase.apache.org
Cc: user u...@hbase.apache.org
Sent: June 10, 2014 (Tuesday) 10:
What user did you use for the secure bulk load?
See the steps outlined in the javadoc of SecureBulkLoadEndpoint (first two
steps copied below):
* 1. Create an hbase owned staging directory which is
* world traversable (711): /hbase/staging
* 2. A user writes out data to his secure output dir
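The "711" in step 1 above is a plain POSIX octal mode. As a quick illustration of what those three digits grant (the `decode` helper below is just for this sketch, not an HBase or Hadoop API):

```java
public class ModeDemo {
    // Expand an octal permission mode into the familiar rwx string
    static String decode(int mode) {
        char[] syms = {'r', 'w', 'x'};
        StringBuilder sb = new StringBuilder();
        for (int shift = 8; shift >= 0; shift--) {
            sb.append(((mode >> shift) & 1) == 1 ? syms[(8 - shift) % 3] : '-');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // 0711: owner rwx, group/other execute-only, so any user can
        // traverse into the staging dir but cannot list other users' subdirs
        System.out.println(decode(0711)); // prints rwx--x--x
    }
}
```

Execute-only ("traversable") access is what lets each user reach his own staging subdirectory without being able to enumerate anyone else's.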
To my understanding, 0.98.3 would be the next stable release.
Supporting 0.98 in Haeinsa would give more users a chance to try it out.
Cheers
On Mon, Jun 9, 2014 at 7:34 PM, James Lee wrote:
> Ted Yu:
> Thank you for the notice.
>
> I know there are major changes on HBase client interface in
Ted Yu:
Thank you for the notice.
I know there are major changes on HBase client interface in 0.98.
I'm sure that support for HBase 0.98 in Haeinsa will come sooner than
the upgrade of our cluster.
There is no plan to upgrade our HBase cluster to 0.98 for now,
but supporting HBase 0.98 in Haeinsa is th
hi, everyone:
Our HBase version is 0.96.1.1-hadoop2, and we use Kerberos for security.
But when I use bulk load, the code runs to this location in
LoadIncrementalHFiles (line 307):
new SecureBulkLoadClient(table).cleanupBulkLoad(bulkToken);
and throws the exception below:
Does anyon
There have been some fixes for secure bulk load.
The latest of which is HBASE-11311.
Are you able to try out, say, 0.98.3, which was released today?
Cheers
On Jun 9, 2014, at 6:40 PM, bull fx wrote:
> hi, everyone :
>
> Our hbase version is 0.96.1.1-hadoop2 ,and we use kerberos for security
Apache HBase 0.98.3 is now available for download. Get it at your nearest
Apache mirror [1] or Maven repository.
The list of changes in this release can be found in the release notes [2]
or following this announcement.
Thanks to all who contributed to this release.
Best,
The HBase Dev Team
1.
James:
Nice to hear from you.
I refreshed my local workspace for Haeinsa. In pom.xml, I see an hbase version
of 0.94.3.
The latest 0.94 release was 0.94.20.
0.98.3 was released this past weekend.
Do you have plans to bring 0.98 support to Haeinsa?
Cheers
On Mon, Jun 9, 2014 at 9:49 AM, James Lee wrote:
I think there is also https://github.com/yahoo/omid . Not sure of the
status of any of those; I have never tried them either...
2014-06-09 12:49 GMT-04:00 James Lee :
> I'm using HBase as OLTP for about 3 years.
> One of biggest pain point of using HBase as OLTP was lack of multi-row
> transaction
I've been using HBase as an OLTP store for about 3 years.
One of the biggest pain points of using HBase for OLTP was the lack of multi-row
transactions.
For this reason, we implemented multi-row transactions on top of HBase.
It was designed to support OLTP (low latency and so on).
https://github.com/vcnc/haeinsa/
There is
Never mind. I removed the synchronized block from the code.
Thanks to every one!
--
View this message in context:
http://apache-hbase.679495.n3.nabble.com/Discuss-HBase-with-multiple-threads-tp4060081p4060235.html
Sent from the HBase User mailing list archive at Nabble.com.
You can remove the synchronized block: the connection is per thread, so you
don't need it.
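A minimal sketch of the per-thread pattern described above, using a stand-in `Conn` class instead of the real HBase connection/HTable classes (which are not assumed here):

```java
public class PerThreadDemo {
    // Stand-in for a per-thread HBase connection/table handle
    static class Conn { }

    // Each thread lazily gets its own instance; no locking required
    static final ThreadLocal<Conn> CONN = ThreadLocal.withInitial(Conn::new);

    public static void main(String[] args) throws InterruptedException {
        Conn[] seen = new Conn[2];
        Thread t1 = new Thread(() -> seen[0] = CONN.get());
        Thread t2 = new Thread(() -> seen[1] = CONN.get());
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Distinct threads got distinct instances, so a synchronized
        // block around per-thread access buys nothing but latency
        System.out.println(seen[0] != seen[1]); // prints true
    }
}
```

Within a single thread, repeated `CONN.get()` calls return the same instance, which is why no synchronization is needed on that path.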
On Mon, Jun 9, 2014 at 8:04 AM, Hotec04 wrote:
> I run it without the block and seems it is still working. Will this be an
> issue if the code go with the synchronized block other than latency?
In the last HBaseCon and in HBaseCon 2013, there were several
use cases for this combination.
http://hbasecon.com
You can search Slideshare.net in Cloudera's profile using the
hbasecon tag for all presentations related to the conference.
One of my favorite sessions at HBaseCon 2013 was
I ran it without the block and it seems it is still working. Will this be an
issue if the code goes with the synchronized block, other than latency? Do I
have to remove it?
Thank you!
You're missing too many things.
Why do you want to use the wrong tool for the job?
OLTP is a different beast from what HBase has to offer.
On Jun 9, 2014, at 7:10 AM, N. Ramasubramanian
wrote:
> Hi,
>
> Thanks for the link… but in messaging there will not be any update.. but in
> the rea
Hi,
Thanks for the link… but in messaging there will not be any updates, whereas in
real-time OLTP there will be huge numbers of updates… any suggestions on how to handle this?
regards,
Rams
On 09-Jun-2014, at 2:33 pm, 谢良 wrote:
> borthakur.com/ftp/RealtimeHadoopSigmod2011.pdf
>
> Thanks,
> _
The decoded rowkey has a timestamp.
KeyValue has a timestamp field.
Do these two timestamps carry the same value?
Cheers
On Jun 9, 2014, at 2:02 AM, Guillermo Ortiz wrote:
> Hi,
>
> I'm generating key with SHA1, as it's a hex representation after generating
> the keys, I use Hex.decode to save memo
Hi Guillermo,
Any chance to share your code to see where this can come from?
JM
2014-06-09 5:02 GMT-04:00 Guillermo Ortiz :
> Hi,
>
> I'm generating key with SHA1, as it's a hex representation after generating
> the keys, I use Hex.decode to save memory since I could store them in half
> space
borthakur.com/ftp/RealtimeHadoopSigmod2011.pdf
Thanks,
From: Ramasubramanian Narayanan [ramasubramanian.naraya...@gmail.com]
Sent: June 9, 2014 13:27
To: u...@hive.apache.org; user@hbase.apache.org
Subject: White papers/Solution implemented to use HIVE/HBASE as OLT
Hi,
I'm generating keys with SHA1; since the result is a hex representation, after
generating the keys I use Hex.decode to save memory, since I can store them in
half the space.
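For reference, the halving being described: a 20-byte SHA1 digest becomes a 40-character hex string, and decoding the hex back to raw bytes recovers the 20-byte form. A self-contained sketch using the JDK's MessageDigest and a hand-rolled decoder in place of `Hex.decode` (the key "user-42" is made up):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class RowKeyDemo {
    // Convert a hex string back to raw bytes, halving the stored size
    static byte[] hexToBytes(String hex) {
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-1")
                .digest("user-42".getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        String hex = sb.toString();
        byte[] rowKey = hexToBytes(hex);
        // 40 hex chars vs 20 raw bytes
        System.out.println(hex.length() + " vs " + rowKey.length);
    }
}
```

One caveat with this approach: any code that later compares or deletes keys must use the raw-byte form consistently, since the hex string and its decoded bytes are different row keys to HBase.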
I have a MapReduce process which deletes some of these keys; the problem
is that when I try to delete them, I don't get
Hi,
I want to discuss the properties of setting the maximum
number of client connections in HBase.
We need to discuss the pros and cons of implementing such a feature in HBase,
similar to various other databases, to avoid overloading.
Please share your opinions.
Regards
KASH
If I use an external ZooKeeper it's OK.
1. I modified hbase-env.sh and added
export HBASE_MANAGES_ZK=false
2. started ZooKeeper in standalone mode
3. ran start-hbase.sh
What's the problem?
On Mon, Jun 9, 2014 at 2:35 PM, Li Li wrote:
> letting start-hbase.sh do it for me
>
> On Mon, Jun 9, 2014 at 2:05