You are right.
The software should not get a free pass only because the hardware might lose
data, though. :)
From: Otis Gospodnetic
To: user@hbase.apache.org; lars hofhansl
Sent: Monday, January 14, 2013 7:19 PM
Subject: Re: persistence in Hbase
Hi,
Just ...
This is a broad topic by itself. In short, people often use battery-backed
cache or leave the write cache disabled for such concerns. There are
various factors involved when deciding whether to leave caches enabled or
not. Caches are often good for OLTP-type applications or even light OLAP
workloads. But for ...
Hi,
Just for my own edification - isn't data loss always going to be possible
due to the caches present in HDDs and the inability(?) to force them to
flush? I believe I've read that even fsync lies...
Thanks,
Otis
--
HBASE Performance Monitoring - http://sematext.com/spm/index.html
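For what it's worth, the same distinction shows up in the HDFS client API.
A minimal Java sketch (assuming a Hadoop 2.x client; the path is a
placeholder):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FlushLevels {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FSDataOutputStream out = fs.create(new Path("/tmp/flush-demo"));
        out.write("edit".getBytes("UTF-8"));
        out.hflush(); // flushed to datanode memory: visible to readers,
                      // but lost if all replica machines go down
        out.hsync();  // additionally asks each datanode to fsync to disk,
                      // which is where a lying disk write cache can still bite
        out.close();
      }
    }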
Hi mohandes.zebeleh,
You can adjust the parameters below (major compaction, minor compaction,
split). If you do not set them, they retain the default value (1):
hbase.regionserver.thread.compaction.large = 5
hbase.regionserver.thread.compaction.small = 10
hbase.regionserver.thread.split = 5
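In hbase-site.xml form this would look roughly like the sketch below (one
property block per setting); these are read at startup, so restart the
region servers for the change to take effect:

    <property>
      <name>hbase.regionserver.thread.compaction.large</name>
      <value>5</value>
    </property>
    <property>
      <name>hbase.regionserver.thread.compaction.small</name>
      <value>10</value>
    </property>
    <property>
      <name>hbase.regionserver.thread.split</name>
      <value>5</value>
    </property>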
Hi hbase users,
I am running HBase (on top of HDFS) in distributed mode (on 8 VMs), and
things like jps look fine on all the machines in the cluster. I am also
able to run the hbase shell and interact with HBase through it. But when I
want to benchmark my HBase cluster with YCSB (Yahoo! Cloud Serving
Benchmark) ...
... restart Hbase cluster?
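For reference on the YCSB side, a load/run usually looks roughly like the
lines below (a sketch: the column family name is a placeholder, the default
usertable must already exist with that family, and the HBase binding finds
the cluster through an hbase-site.xml on its classpath):

    bin/ycsb load hbase -P workloads/workloada -p columnfamily=f1
    bin/ycsb run hbase -P workloads/workloada -p columnfamily=f1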
Is that one unassigned task even getting assigned, or did it error out and
won't ...
I restarted the HBase master, and it's taking a long time (at least 30
minutes) to come back.
On the master-status page I am seeing over 400 regions in transition.
In the HBase master log I am seeing the following:
DEBUG org.apache.hadoop.hbase.master.SplitLogManager: total tasks = 1
unassigned = 1
(This is ...
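Two ways to watch what the master is doing while it churns (a sketch,
assuming a default setup): the shell's detailed status lists regions in
transition, and the distributed-log-splitting tasks live under the
splitlog znode:

    hbase> status 'detailed'
    $ zkCli.sh -server zkhost:2181 ls /hbase/splitlog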
This is enforced in the server-side scanner framework (ScanQueryMatcher,
called by StoreScanner).
So while expired KeyValues are only physically removed once a compaction
runs, they are logically hidden by the scanner framework.
In fact, the same scanner framework is used to decide whether KeyValues
exceed the configured maximum number of versions.
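To make that concrete, a minimal sketch against the 0.94-era client API
(the table name is made up): even if the client asks for every version, the
server returns at most the family's VERSIONS setting per qualifier.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;

    public class MaxVersionsDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "t1");
        Scan scan = new Scan();
        scan.setMaxVersions(); // request all versions the server will give us
        ResultScanner scanner = table.getScanner(scan);
        for (Result r : scanner) {
          // at most VERSIONS cells per column qualifier come back, because
          // ScanQueryMatcher filters the rest before they leave the server
          System.out.println(r);
        }
        scanner.close();
        table.close();
      }
    }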
What is the root directory location of HBase?
Sent from Samsung Galaxy Note

Ibrahim Yakti wrote:
I'm using cdh4 on ec2
HBase 0.92
Sqoop 1.4.2
I'll double check versions tomorrow.
When I reboot, all the tables are deleted; I'll check the default location
tomorrow as well.
What about the weird count issue?
I'm using cdh4 on ec2
HBase 0.92
Sqoop 1.4.2
I'll double check versions tomorrow.
When I reboot, all the tables are deleted; I'll check the default location
tomorrow as well.
What about the weird count issue?
Thanks,
Ibrahim
Sent from another galaxy device.
On Jan 14, 2013 8:22 PM, "Stack" wrote:
Hi Prabhjot,
Right now, your only choices for HBase directly from a .NET client are the
Thrift or REST gateways. You can find documentation about those interfaces
here [0] and here [1], respectively.
Best of luck,
-n
[0]: http://hbase.apache.org/book/thrift.html
[1]: http://wiki.apache.org/hadoo
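As a rough illustration of what any HTTP-capable client (a .NET one
included) would send to the REST gateway; the host, port, table, and row
below are made up, and cell values come back base64-encoded in the JSON:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RestGatewayGet {
      public static void main(String[] args) throws Exception {
        // GET /<table>/<rowkey> against the REST gateway's default port
        URL url = new URL("http://resthost:8080/mytable/row1");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/json");
        BufferedReader in = new BufferedReader(
            new InputStreamReader(conn.getInputStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
          System.out.println(line); // cell values are base64-encoded
        }
        in.close();
      }
    }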
I have a follow up question here:
A column family can be defined to have a maximum number of versions per
column qualifier. Is this enforced only by the client-side code (HTable) or
also by the InternalScanner implementations?
On Monday, January 14, 2013, S Ahmed wrote:
> Thanks Lars!
>
> S
Thanks Lars!
Sort of a side question after following your proposed patch:
https://issues.apache.org/jira/secure/attachment/12511771/5268-v5.txt
Locally on your computer (laptop?), can those tests run in isolation, or do
you need a fairly complicated setup to run them (all the various HBase
dependencies)?
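In case it helps: most HBase unit tests stand up an in-process mini cluster
via HBaseTestingUtility, so they generally do run in isolation on a laptop.
A single test can be run through Maven's surefire flag (the test name here
is only an example):

    mvn test -Dtest=TestStoreScanner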
On Mon, Jan 14, 2013 at 6:12 AM, Ibrahim Yakti wrote:
> Hello,
>
> I have a weird issue. I am using Sqoop to import data from MySQL into
> HBase; Sqoop confirms that 2.5 million records were imported, but when I
> do count "table_name" in the HBase shell it returns numbers like:
>
> 260970 row(s) in 20.4740 seconds
Hi Ibrahim,
Thanks for reporting the issue you are seeing. Would you be able to provide
a little more information about the versions of HBase and Sqoop that you
are using?
Also, have you checked in HDFS to see if your data is there after the
reboot?
-Aleks S.
On Monday, January 14, 2013, Ibrahim Y
Hello,
I have a weird issue. I am using Sqoop to import data from MySQL into
HBase; Sqoop confirms that 2.5 million records were imported, but when I do
count "table_name" in the HBase shell it returns numbers like:
260970 row(s) in 20.4740 seconds
(I have used Sqoop to import the same data from MySQL to ...
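One way to cross-check the shell's count is the bundled MapReduce
RowCounter (a sketch; also note that source records sharing the same row
key collapse into a single HBase row, which can make the row count lower
than the number of records imported):

    hbase org.apache.hadoop.hbase.mapreduce.RowCounter table_name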
Hi,
For feedback:
After a lot of profiling work, I think I found the most promising cause of
my problem.
It's not because one disk is slow or something (though I do have slow disks
on different region servers, the lagging-behind pattern does not seem
related to the disk-slowness pattern).
It seems ...