likely to hit a wider
audience on user@phoenix.
Daniel Wong
On 2021/08/22 10:34:28, "Gupta, Atul" wrote:
> Hi HBASE PMC Members.
>
> Greetings!
>
> We are one of the active users of HBASE and Phoenix. There are a number of
> HBASE RT/Batch use cases running in Lo
Looks like https://issues.apache.org/jira/browse/HBASE-19215 to me.
Passing -Djdk.nio.maxCachedBufferSize=262144 to the client JVM might
prevent this from happening.
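For example, on the client launch command (the jar and main class here are
placeholders); the flag caps the size of direct ByteBuffers that NIO's
per-thread buffer cache will retain:

  java -Djdk.nio.maxCachedBufferSize=262144 -cp my-app.jar com.example.MyHBaseClient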
Fri, 11 Jan 2019 at 22:04 Buchi Reddy Busi Reddy
wrote:
>
> Hi,
>
> In our production, we have been seeing this strange issue
Hi,
I want to configure HBase in a DR scenario. I create two separate clusters
and configure master-master replication. At the beginning I use only the PROD
cluster, and in case of a data center disaster I want to switch clients to
the DR cluster.
Is the client able to switch automatically from PROD to the DR cluster?
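For what it's worth, the stock client binds to a single cluster's ZooKeeper
quorum, so the switch has to happen at the application level. A minimal
sketch (assuming the HBase 1.x API; quorum host names are placeholders):

  // The client only talks to the cluster named in its configuration, so
  // failing over means opening a new Connection against the DR quorum.
  Configuration prod = HBaseConfiguration.create();
  prod.set("hbase.zookeeper.quorum", "zk-prod-1,zk-prod-2,zk-prod-3");
  Configuration dr = HBaseConfiguration.create();
  dr.set("hbase.zookeeper.quorum", "zk-dr-1,zk-dr-2,zk-dr-3");

  Connection conn;
  try {
    conn = ConnectionFactory.createConnection(prod);
  } catch (IOException e) {
    conn = ConnectionFactory.createConnection(dr); // manual DR switch
  }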
Hi,
fsync mode on WAL is not supported currently. So theoretically it is
possible that in case of power failure the data is lost because DataNodes
didn't flush it to physical disk.
I know that the probability is small and the data center should have an additional
power source, but it is still possible. How d
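For context, the client API does expose a per-mutation durability hint
(sketch below, assuming a 1.x client), though per the above FSYNC_WAL was
not honored as a true fsync at the time:

  Put put = new Put(Bytes.toBytes("row1")); // row/family/qualifier are placeholders
  put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
  put.setDurability(Durability.FSYNC_WAL);  // accepted, but behaved like SYNC_WAL here
  table.put(put);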
Hi,
I was surprised by one HBase behaviour. I was modifying one cell in one row
and then extracting this row with my custom filter. The timestamp for all
modifications in my database is currently 0. Surprising for me was that the
filter was provided with the previous versions of a cell, probably
bec
2017-11-07 18:22 GMT+01:00 Stack :
> On Mon, Nov 6, 2017 at 6:33 AM, Daniel Jeliński
> wrote:
>
> > For others that run into a similar issue, it turned out that the
> > OutOfMemoryError was thrown (and subsequently hidden) on the client side.
> > The error was caused
, and setting
-Djdk.nio.maxCachedBufferSize=262144
allowed the application to complete.
Yet another proof that correct handling of OOME is hard.
Thanks,
Daniel
2017-10-11 11:33 GMT+02:00 Daniel Jeliński :
> Thanks for the hints. I'll see if we can explicitly set
> MaxDirectMemorySize
Thanks for the hints. I'll see if we can explicitly set MaxDirectMemorySize
to a safe number.
Thanks,
Daniel
2017-10-10 21:10 GMT+02:00 Esteban Gutierrez :
> http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/tip/src/share/classes/sun/misc/VM.java#l184
>
> // The initial value
Vladimir,
-XX:MaxDirectMemorySize is set to the default 0, which means unlimited as far
as I can tell.
Thanks,
Daniel
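For reference, on JDK 8 you can print the cap the VM actually computed,
using the internal API from the VM.java source linked above (a sketch;
sun.misc.VM is JDK-internal and not a supported API):

  // Sketch (JDK 8): prints the effective direct-memory limit. Per VM.java,
  // when -XX:MaxDirectMemorySize is unset it falls back to the max heap size.
  System.out.println("direct memory cap: " + sun.misc.VM.maxDirectMemory() + " bytes");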
2017-10-09 19:30 GMT+02:00 Vladimir Rodionov :
> Have you try to increase direct memory size for server process?
> -XX:MaxDirectMemorySize=?
>
> On Mon, Oct 9, 201
Ted,
Apparently it does not; BoundedByteBufferPool does not contain any
references to allocateDirect.
Thanks,
Daniel
2017-10-09 19:41 GMT+02:00 Ted Yu :
> Daniel:
> Does the version you use contain HBASE-13819 ?
>
> Cheers
>
> On Mon, Oct 9, 2017 at 2:12 AM, Daniel
Sudhir,
Not on the cluster in question. It's still configured to use HFile v2, and
as far as I can tell, this is not a setting you could override on a table
level.
Thanks,
Daniel
2017-10-10 0:16 GMT+02:00 sudhir patil :
> If your cell size is greater than 100KB it's recommended to u
Cluster is
running HBase 1.2.0-cdh5.10.2.
Is this a known problem? Are there workarounds available?
Thanks,
Daniel
,
Daniel
2017-08-30 14:21 GMT+02:00 deepaksharma25 :
> Hello,
> I am new to HBase DB and currently evaluating it for one of the requirements
> we have from a customer.
> We are going to write TBs of data in HBase daily and we need to fetch
> specific data based on a filter.
>
> I ca
Filed HBASE-18381 <https://issues.apache.org/jira/browse/HBASE-18381> for
this.
2017-07-14 14:34 GMT+02:00 Daniel Jeliński :
> Hi Ted,
> Thanks for looking into this. I'm not an admin of this cluster, so I
> probably won't be able to help with testing.
>
> Just to
so crashes; every crash leaves a 64MB temp file, which adds
up quickly, since the region servers are restarted automatically.
I'll put that in a JIRA.
Regards,
Daniel
2017-07-14 14:26 GMT+02:00 Ted Yu :
> I put up a quick test (need to find better place) exercising the snippet
>
onServer service is aborted.
I wasn't able to reproduce this issue with MOB disabled.
Regards,
Daniel
table was not compacted prior to the test, so data locality
may play a role in the observed results.
Regards,
Daniel
2017-04-04 21:44 GMT+02:00 Mikhail Antonov :
> Unfamiliar with MOB codebase but reading.. " It takes 100 ms
> to retrieve a 1MB cell (file), and only after retrieving I
Hi Vlad,
That looks a lot like what MOBs do today. While it could work, it seems
overly complicated compared to implementing a custom hbase client for what
HBase offers already.
Thanks,
Daniel
2017-03-31 19:25 GMT+02:00 Vladimir Rodionov :
> Use HBase as a file system meta storage (index), k
sons other than the API that justify the 10MB limit on
MOBs?
Thanks,
Daniel
2017-03-31 0:03 GMT+02:00 Ted Yu :
> Have you read:
> http://hbase.apache.org/book.html#hbase_mob
>
> In particular:
>
> When using MOBs, ideally your objects will be between 100KB and 10MB
>
> Cheer
as possible.
What's the recommended approach to avoid or reduce the delay between when
HBase starts sending the response and when the application can act on it?
Thanks,
Daniel
thx, this is what I needed
2017-03-02 11:07 GMT+01:00 Ted Yu :
> Daniel:
> If you don't pass your ExecutorService to HTable ctor, the following would
> be called:
>
> public static ThreadPoolExecutor getDefaultExecutor(Configuration conf)
> {
>
>
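A sketch of the alternative, passing your own pool into the HTable
constructor (0.98/1.x-era API; the pool size is arbitrary):

  // Supplying your own executor instead of the default one created by
  // getDefaultExecutor(), so you control the client-side parallelism.
  ExecutorService pool = Executors.newFixedThreadPool(16);
  HTable table = new HTable(conf, TableName.valueOf("t1"), pool);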
sorService? This API should run coprocessors in
parallel for different regions.
2017-03-02 14:32 GMT+08:00 Daniel Połaczański :
> I invoke my business logic, which is similar to the map-reduce paradigm.
> Everything works; the only problem is performance, as regions are not
> processed in parallel.
>
wrote:
> To my knowledge, there is no support for this type of combination of map
> reduce and coprocessor.
>
> On Wed, Mar 1, 2017 at 2:55 PM, Daniel Połaczański >
> wrote:
>
> > It is something like map reduce processing.
> > I want to run map and combine ph
you describe your use case in more detail ?
>
> What type of custom coprocessor are you loading to the region server ?
>
> Thanks
>
> On Wed, Mar 1, 2017 at 2:24 PM, Daniel Połaczański >
> wrote:
>
> > Hi,
> > Let's assume that we have cluster consisting fr
Hi,
Let's assume that we have a cluster consisting of one RegionServer, and the
RegionServer contains one table consisting of 3 regions.
I would like to process the regions in a coprocessor in parallel. Is it possible?
I observed that currently it invokes the coprocessor with all the regions
one by one
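For reference, a hedged sketch of the Endpoint fan-out (MyService and
doRegionWork are hypothetical placeholders for your generated protobuf
service and per-region logic):

  // null start/stop keys = every region of the table; the calls run on the
  // client's thread pool, one invocation per region, in parallel.
  Map<byte[], Long> results = table.coprocessorService(
      MyService.class, null, null,
      new Batch.Call<MyService, Long>() {
        public Long call(MyService service) throws IOException {
          return doRegionWork(service); // hypothetical per-region logic
        }
      });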
Hi,
I don't use any encoding or compression.
version 1.2.3
2017-01-27 0:11 GMT+01:00 Ted Yu :
> Daniel:
> For the underlying column family, do you use any data block encoding /
> compression ?
>
> Which hbase release do you use ?
>
> Thanks
>
> On Thu, Jan 26
Hi,
at work we were testing the following scenarios regarding scan
performance. We stored 2500 domain rows containing 20 attributes, and after
that we read one random row with all attributes a couple of times.
Scenario A
Every attribute stored in a dedicated column: one HBase row with 20
columns.
-hbase/
Thanks!
Dan
Daniel Vimont
about.me/dvimont
, and then provide feedback via the single-item survey[3].
-
[1] https://drive.google.com/open?id=0B0skoeyva4KiV3Y3WmN3M3BuTE0
[2] http://bit.ly/cm-videos
[3] http://bit.ly/cm-surveymonkey
Thanks!
Dan
Daniel Vimont
about.me/dvimont
on": https://youtu.be/Pc7RoiefTlw
[5] Javadocs: http://bit.ly/ColumnManagerJavadocs
[6] Developer Showcase: http://conferences.oreilly.com/strata/hadoop-big-data-ny/public/schedule/detail/54247
Daniel Vimont
about.me/dvimont
ColumnManager
[4] INSTALL/CONFIGURE video: https://youtu.be/aMnQouccmYY
[5] INTRODUCTION video: https://youtu.be/Pc7RoiefTlw
Daniel Vimont
about.me/dvimont
hub: http://bit.ly/ColumnManager
[3] YouTube: https://youtu.be/Pc7RoiefTlw
[4] Javadocs: http://bit.ly/ColumnManagerJavadocs
Daniel Vimont
about.me/dvimont
Cheyenne,
Looks like you're attempting to do what Xiaochun was also recently tasked
with doing: install HBase in a Windows environment using Cygwin[1].
I shared with him the frustrations I experienced when attempting to do this
over a year ago[2], but forwarded him the "raw" notes that I took as
e on Windows with
> Cygwin. Thanks a lot for your suggestions. Looks like I still need to setup
> HBase with Cygwin. Could you please share the problems you met?
>
> Regards,
> xiaochun
>
> -----Original Message-----
> From: Daniel Vimont [mailto:dan...@commonvox.org]
>
FWIW, I tried quite some time ago to set up HBase on Windows with Cygwin
(just to see if I could do it), and I ran into so many problems that I
simply quit trying after a few hours.
However, I DO use a Windows box to do all of my HBase-contributor work on
(and I do it in the way that Mike has alre
The beta-02 release of ColumnManager for HBase is now available on GitHub
and via the Maven Central Repository.
From the top of the README:
==
ColumnManagerAPI for HBase™ is an extended METADATA REPOSITORY SYSTEM for
HBase with options for:
COLUMN AUDITING -- captures Column meta
unfortunately no
Regards
2016-03-25 21:13 GMT+01:00 Ted Yu :
> bq. calculating another new attributes of a trade
>
> Can you put the new attributes in separate columns ?
>
> Cheers
>
> On Fri, Mar 25, 2016 at 12:38 PM, Daniel Połaczański <
> dpolaczan...@gmail.com
>
lains the frequent split :-)
>
> Is the original data needed after post-processing (maybe for auditing) ?
>
> Cheers
>
> On Fri, Mar 25, 2016 at 10:32 AM, Daniel Połaczański <
> dpolaczan...@gmail.com
> > wrote:
>
> > I am testing different solutions (POC).
> >
ite is reduced ?
>
> Thanks
>
> On Fri, Mar 25, 2016 at 8:55 AM, Daniel Połaczański <
> dpolaczan...@gmail.com>
> wrote:
>
> > Hi,
> > I have some processing in my coprocessorService which modifies the
> existing
> > data in place. It iterates over ever
Hi,
I have some processing in my coprocessorService which modifies the existing
data in place. It iterates over every row, modifies it, and puts it back to the
region. The table can be modified by only one client.
During the processing the size of the data gets increased -> the region's size gets
increased -> regi
worker threads" and "max queued
requests", but cannot find them in the documentation.
Thanks for any hint.
Daniel
Hi, my Thrift API server seems to allow at most 16 concurrent connections. Is
there a way to raise this limit? I cannot find a clue in the documentation.
Thanks a lot.
Daniel
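If memory serves, the knobs in question are the Thrift server's
bounded-thread-pool settings; a sketch (the property names come from the
TBoundedThreadPoolServer implementation, so treat them as assumptions to
verify against your version):

  // Assumed property names; the minimum worker count reportedly defaults to 16,
  // which would match the 16-connection ceiling described above.
  Configuration conf = HBaseConfiguration.create();
  conf.setInt("hbase.thrift.minWorkerThreads", 16);
  conf.setInt("hbase.thrift.maxWorkerThreads", 1000);
  conf.setInt("hbase.thrift.maxQueuedRequests", 1000);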
The only performance impact I can think of around
> this change would be major compaction of the table, but even that shouldn't
> be an issue.
>
>
> _
> From: Daniel
> Sent: Sunday, February 21, 2016 9:22 AM
> Subject: Two questions about th
maximum number of versions but insert only
a few versions? Does it waste space?
(2) How much performance overhead does it cause to increase the maximum number
of versions of a column family after an enormous number of rows (e.g. billions)
have been inserted?
Regards,
Daniel
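On (1): as far as I know HBase only stores cells that are actually written,
so a high VERSIONS bound with few inserted versions wastes no space. On (2),
a sketch of the alter itself (1.x admin API; names are placeholders):

  Admin admin = connection.getAdmin();
  HColumnDescriptor cf = new HColumnDescriptor("f1"); // placeholder family
  cf.setMaxVersions(10);
  admin.modifyColumn(TableName.valueOf("t1"), cf);    // online schema change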
Serega,
I agree with Anil Gupta that direct use of the Java API should prove much
more straightforward than indirectly invoking the HBase shell from within
Java.
If you need a brief "gist" example of how to use the Java API for HBase,
you can find one here:
https://gist.github.com/dvimont/a7791f6
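Along those lines, a minimal sketch of the plain Java client API (HBase 1.x;
table, family, and qualifier names are placeholders):

  Configuration conf = HBaseConfiguration.create();
  try (Connection connection = ConnectionFactory.createConnection(conf);
       Table table = connection.getTable(TableName.valueOf("t1"))) {
    Put put = new Put(Bytes.toBytes("row1"));
    put.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("q1"), Bytes.toBytes("value"));
    table.put(put);
    Result result = table.get(new Get(Bytes.toBytes("row1")));
  }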
I am a relative newcomer to the HBase world, and have been working for a
couple of months to acquire experience in working with the HBase API.
Part of my hands-on learning process has been devoted to the building of a
set of metadata-management tools (building upon my past experiences with
other D
You will need to change ivy/libraries.properties to specify the right HBase
version and compile again.
On Wed, Nov 4, 2015 at 6:31 AM, Ted Yu wrote:
> ... 22 more
> Caused by: java.lang.NoSuchMethodError:
> org.apache.hadoop.hbase.client.Scan.setCacheBlocks(Z)V
> at
>
> Looks like the version of Pig you
Ted Yu, thank you very much! I upgraded HBase and now it works properly.
On Tue, Aug 4, 2015 at 6:03 PM, Ted Yu wrote:
> Daniel:
> Looks like you're hitting:
> HBASE-13935 Orphaned namespace table ZK node should not prevent master to
> start
>
> Please run HBase 1.1.1 which
On Tue, Aug 4, 2015 at 11:55 AM, Ted Yu wrote:
>
> Your hbase-site.xml is effectively empty.
>
> Have you followed this guide ?
> http://hbase.apache.org/book.html#quickstart
>
> Cheers
>
> On Tue, Aug
I researched on the internet but I couldn't find anything to help me. I don't
know what to do anymore.
Thank you!
--
-dom
--
Daniel de Oliveira Mantovani
Business Analytic Specialist
Perl Evangelist /Astrophysics hobbyist.
+55 11 9 8538-9897
XOXO
On Tue, May 13, 2014 at 9:58 AM, Liam Slusser wrote:
> You can also create a table via the hbase shell with pre-split tables like
> this...
>
> Here is a 32-byte split into 16 different regions, using base16 (ie a md5
> hash) for the key-type.
>
> create 't1', {NAME => 'f1'},
> {SPLITS=> ['10
Are you using HFileOutputFormat.configureIncrementalLoad() to set up the
partitioner and the reducers? That will take care of ordering your keys.
J-D
On Thu, May 1, 2014 at 5:38 AM, Guillermo Ortiz wrote:
> I have been looking at the code in HBase, but I don't really understand
> what this err
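For reference, a hedged sketch of that setup (classic pre-1.0 bulk-load API;
MyMapper is a placeholder):

  Job job = Job.getInstance(conf, "bulk load");
  job.setMapperClass(MyMapper.class);                    // emits row key / KeyValue pairs
  job.setMapOutputKeyClass(ImmutableBytesWritable.class);
  job.setMapOutputValueClass(KeyValue.class);
  HTable table = new HTable(conf, "t1");
  // wires in TotalOrderPartitioner plus one reducer per region,
  // so the output HFiles come out ordered by key
  HFileOutputFormat.configureIncrementalLoad(job, table);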
On Tue, Apr 15, 2014 at 12:17 AM, Hansi Klose wrote:
> Hi Jean-Daniel,
>
> thank you for your answer and bring some light into the darkness.
>
You're welcome!
>
> > You can see the bad rows listed in the user logs for your MR job.
>
> What log do you mean. The o
Yeah you should use endtime, it was fixed as part of
https://issues.apache.org/jira/browse/HBASE-10395.
You can see the bad rows listed in the user logs for your MR job.
J-D
On Mon, Apr 14, 2014 at 3:06 AM, Hansi Klose wrote:
> Hi,
>
> I wrote a little script which should control the running
It's a simple leader election via ZooKeeper.
J-D
On Tue, Apr 8, 2014 at 7:18 AM, gortiz wrote:
> Could someone explain the process of selecting the next HMaster
> when the current one goes down? I've been looking for information about
> it in the documentation, but I haven't fo
On Mon, Mar 17, 2014 at 6:01 AM, Linlin Du wrote:
> Hi all,
>
> First question:
> According to documentation, hfile.block.cache.size is by default 40
> percentage of maximum heap (-Xmx setting). If -Xmx is not used and only
> -Xms is used, what will it be in this case?
>
> Second question:
> This
Resurrecting this old thread. The following error:
"java.lang.RuntimeException: Failed suppression of fs shutdown hook"
Is caused when HBase is compiled against Hadoop 1 and has Hadoop 2 jars on
its classpath. Someone on IRC just had the same issue and I was able to
repro after seeing the classpa
IIRC it used to be an issue if the folder was already existing, even if
empty. It's not the case anymore.
J-D
On Fri, Feb 7, 2014 at 3:38 PM, Jay Vyas wrote:
> Hi hbase.
>
> In normal installations, I'm wondering who should create hbase root.dir.
>
> 1) I have seen pseudo-distributed mode docs
That's right, round robin should only be applied when you start answering
some client request and stick to it until you're done.
J-D
On Fri, Dec 6, 2013 at 9:17 PM, Varun Sharma wrote:
> Hi everyone,
>
> I have a question about the hbase thrift server and running scans in
> particular. The thr
The problem with having a bunch of masters racing is that it's not evident
for the operator who won, so specifying --backup to all but one master
ensures that you always easily know where the master is.
Relevant code from HMaster.java:
// If we're a backup master, stall until a primary to writ
Asaf Mesika wrote:
> Can you please explain why is this suspicious?
>
> On Monday, October 7, 2013, Jean-Daniel Cryans wrote:
>
> > This line:
> >
> > [CMS-concurrent-mark: 12.929/88.767 secs] [Times: user=14.30 sys=3.74,
> > real=88.77
> > secs]
>
hutdownHook(ShutdownHook.java:196)
> at
>
> org.apache.hadoop.hbase.regionserver.ShutdownHook.install(ShutdownHook.java:83)
> at
>
> org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:191)
> at
>
> org.apache.hadoop.hbase.LocalHBaseCluster.startup(Loc
What's happening before this stack trace in the log?
J-D
On Fri, Oct 25, 2013 at 6:10 AM, Salih Kardan wrote:
> Hi all
>
> I am getting the error below while starting hbase (hbase 0.94.11). I guess
> since hbase cannot
> connect to hadoop, I get this error.
>
> *java.lang.RuntimeException: Fai
On Wed, Oct 9, 2013 at 10:59 AM, Vladimir Rodionov
wrote:
> I can't say for SCR. There is a possibility that the feature is broken, of
> course.
> But the fact that hbase.regionserver.checksum.verify does not affect
> performance means that OS caches
> effectively HDFS checksum files.
>
See "OS c
On Tue, Oct 8, 2013 at 7:09 AM, prakash kadel wrote:
>
> > thanks,
> >
> > yup, it seems so. I have 48 gb memory. i see it swaps at that point.
> >
> > btw, why is the CMS not kicking in early? do you have any idea?
> >
> > sincerely
> >
While we're on the topic of upcoming meetups, there's also a meetup at
Facebook's NYC office the week of Strata/Hadoop World (10/28). There's
still room for about 50 attendees.
http://www.meetup.com/HBase-NYC/events/135434632/
J-D
On Mon, Oct 7, 2013 at 2:10 PM, Enis Söztutar wrote:
> Hi guys
This line:
[CMS-concurrent-mark: 12.929/88.767 secs] [Times: user=14.30 sys=3.74,
real=88.77
secs]
Is suspicious. Are you swapping?
J-D
On Mon, Oct 7, 2013 at 8:34 AM, prakash kadel wrote:
> Also,
> why is the CMS not kicking in early? I have set
> -XX:+UseCMSInitiatingOccupancyOnly???
>
>
hbase.master was removed when we added zookeeper, so now a client will do a
lookup in ZK instead of talking to a pre-determined master. So in a
way, hbase.zookeeper.quorum is what replaces hbase.master
FWIW that was done in 0.20.0 which was released in September of 2009, so
hbase.master has been r
I like the way you were able to dig down into multiple logs and present us
the information, but it looks more like GC than an HDFS failure. In your
region server log, go back to the first FATAL and see if it got a session
expired from ZK and other messages like a client not being able to talk to
a
00 rows..
>
>
> On Fri, Sep 27, 2013 at 11:12 PM, Jean-Daniel Cryans wrote:
>
> > Your details are missing important bits like you configurations,
> > Hadoop/HBase versions, etc.
> >
> > Doing those random reads inside your MR job, especially if they are
> r
Your details are missing important bits like you configurations,
Hadoop/HBase versions, etc.
Doing those random reads inside your MR job, especially if they are reading
cold data, will indeed make it slower. Just to get an idea, if you skip
doing the Gets, how fast does it become?
J-D
On Fri, S
That means that the master cluster isn't able to see any region servers in
the slave cluster... is cluster b up? Can you create tables?
J-D
On Fri, Sep 27, 2013 at 3:23 AM, Arnaud Lamy wrote:
> Hi,
>
> I tried to configure a replication with 2 boxes (a&b). A hosts hbase & zk
> and b only hbase
You'd need to use 0.94 (or CDH4.2+ since you are mentioning being on CDH)
to have access to TableInputFormat.SCAN_ROW_START and SCAN_ROW_STOP then
all you need to do is to copy Export's code and add what you're missing.
J-D
On Tue, Sep 24, 2013 at 5:42 PM, karunakar wrote:
> Hi Experts,
>
> I
On flushing we do some cleanup, like removing deleted data that was already
in the MemStore or extra versions. Could it be that you are overwriting
recently written data?
48MB is the size of the Memstore that accumulated while the flushing
happened.
J-D
On Tue, Sep 24, 2013 at 3:50 AM, aiyoh79
On Mon, Sep 23, 2013 at 9:14 AM, John Foxinhead wrote:
> Hi all. I'm doing a project for my university, so I have to know
> exactly how all the HBase ports work. Studying the documentation I found
> that ZooKeeper accepts connections on port 2181, HBase master on port 6
> and HBase regionse
You need to create the table with pre-splits, see
http://hbase.apache.org/book.html#perf.writing
J-D
On Thu, Sep 19, 2013 at 9:52 AM, Dolan Antenucci wrote:
> I have about 1 billion values I am trying to load into a new HBase table
> (with just one column and column family), but am running into
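For example, a sketch with the Java admin API (table and family names and the
key boundaries are placeholders; for hashed keys you would split evenly across
the hex key space):

  HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("t1"));
  desc.addFamily(new HColumnDescriptor("f1"));
  // pre-split into 16 regions across an evenly distributed key space
  admin.createTable(desc, Bytes.toBytes("00000000"), Bytes.toBytes("ffffffff"), 16);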
Could happen if a region moves since locks aren't persisted, but if I were
you I'd ask on the opentsdb mailing list first.
J-D
On Thu, Sep 19, 2013 at 10:09 AM, Tianying Chang wrote:
> Hi,
>
> I have a customer who uses openTSDB. Recently we found that only less than
> 10% of the data is written, res
(putting cdh user in BCC, please don't cross-post)
The web UIs for both the master and the region server have a section called
Tasks and has a bunch of links like this:
Tasks
Show All Monitored Tasks Show non-RPC Tasks Show All RPC Handler Tasks Show
Active RPC Calls Show Client Operations View
Ah I see, well unless you setup "Secure HBase" there won't be any perms
enforcement.
So in which way is your application failing to use "Selector"? Do you have
an error message or stack trace handy?
J-D
On Tue, Sep 17, 2013 at 5:43 AM, BG wrote:
> Well we are trying to find out why our applic
You can always remove the NOT clause by changing the statement, but I'm
wondering what your use case really is. HBase doesn't have secondary
indexes so, unless you are doing a short-ish scan (let's say a million
rows), it means you want to do a full table scan and that doesn't scale.
J-D
On Tue,
What are you trying to do bg? If you want to setup user permissions you
also need to have a "secure" HBase (the link that Ted posted) which
involves Kerberos.
J-D
On Mon, Sep 16, 2013 at 1:33 PM, Ted Yu wrote:
> See http://hbase.apache.org/book.html#d0e5135
>
>
> On Mon, Sep 16, 2013 at 1:06 P
HBASE-8753 doesn't seem related.
Right now there's nothing in the shell that does the equivalent of this:
Delete.deleteFamily(byte [] family)
But it's possible to run Java code in the JRuby shell, so in the end you can
still do it; it just takes more lines.
J-D
On Mon, Sep 16, 2013 at 1:45 AM, Te
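The Java equivalent, as a sketch (0.94-era API; row and family are placeholders):

  Delete delete = new Delete(Bytes.toBytes("row1"));
  delete.deleteFamily(Bytes.toBytes("f1")); // drops every cell of that family for the row
  table.delete(delete);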
Release date is: when it gets released. We are currently going through
release candidates and as soon as one gets accepted we release it. I'd like
to say it's gonna happen this month but who knows.
There's probably one or two presentations online that explain what's in
0.96.0, but the source of tr
Or roll back to CDH 4.2's HBase. They are fully compatible.
J-D
On Thu, Sep 12, 2013 at 10:25 AM, lars hofhansl wrote:
> Not that I am aware of. Reducing the HFile block size will lessen this
> problem (but then cause other issues).
>
> It's just a fix to the RegexStringFilter. You can just reco
Yeah there isn't a whole lot of documentation about metrics. Could it be
that you are still running on a default 1GB heap and you are pounding it
with multiple clients? Try raising the heap size?
FWIW I gave a presentation at HBaseCon with Kevin O'dell about HBase
operations which could shed some
Scan.setBatch does what you are looking for, since with a Get there's no
way to iterate over multiple calls:
https://github.com/apache/hbase/blob/0.94.2/src/main/java/org/apache/hadoop/hbase/client/Scan.java#L306
Just make sure to make the Scan start at the row you want and stop right
after it.
J-D
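A sketch of that single-row, batched Scan (the row key is a placeholder):

  byte[] row = Bytes.toBytes("row1");
  Scan scan = new Scan(row, Bytes.add(row, new byte[] { 0 })); // stop right after the row
  scan.setBatch(100); // at most 100 columns per Result
  ResultScanner scanner = table.getScanner(scan);
  for (Result result : scanner) {
    // successive Results return successive column batches of the same row
  }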
What's your /etc/hosts on the master like? HBase does a simple lookup to
get the machine's hostname, and it "seems" your node reports itself as being
localhost.
On Tue, Sep 3, 2013 at 6:23 AM, Omkar Joshi wrote:
> I'm trying to set up a 2-node HBase cluster in distributed mode.
>
> Somehow, my re
You probably put a string in there that was a number, and increment expects
an 8-byte long. For example, if you did:
put 't1', '9row27', 'columnar:column1', '1'
Then did an increment on that, it would fail.
J-D
On Thu, Aug 29, 2013 at 4:42 AM, yeshwanth kumar wrote:
> I am a newbie to HBase,
>
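A sketch of the working version, reusing the names from the failing example:

  // Store the counter as an 8-byte long rather than the string "1":
  Put put = new Put(Bytes.toBytes("9row27"));
  put.add(Bytes.toBytes("columnar"), Bytes.toBytes("column1"), Bytes.toBytes(1L));
  table.put(put);
  table.incrementColumnValue(Bytes.toBytes("9row27"),
      Bytes.toBytes("columnar"), Bytes.toBytes("column1"), 1);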
Region servers replicate data written to them, so look at how your regions
are distributed.
J-D
On Tue, Aug 27, 2013 at 11:29 AM, Demai Ni wrote:
> hi, guys,
>
> I am using hbase 0.94.9. And setup replication from a 4-nodes master(3
> regserver) to a 3-nodes slave(2 regserver).
>
> I can tell
FYI you'll be in the same situation with 0.95.2, actually worse since it's
really just a developer preview release.
But if you meant "try" in its strict sense, ie use it on a test cluster,
then yes please do. The more people we get to try it out the better 0.96.0
will be.
J-D
On Thu, Aug 22, 20
On Mon, Aug 19, 2013 at 11:52 PM, Monish r wrote:
> Hi Jean,
>
s/Jean/Jean-Daniel ;)
> Thanks for the explanation.
>
> Just a clarification on the third answer,
>
> In our current cluster (0.90.6), I find that irrespective of whether TTL
> is set or not, Maj
You can find a lot here: http://hbase.apache.org/replication.html
And how many logs you can queue is how much disk space you have :)
On Tue, Aug 20, 2013 at 7:23 AM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> Hi,
>
> If I have a master -> slave replication, and master went down, re
Inline.
J-D
On Mon, Aug 19, 2013 at 2:48 AM, Monish r wrote:
> Hi guys,
> I have the following questions in HBASE 0.90.6
>
> 1. Does hbase use only one compaction thread to handle both major and minor
> compaction?
>
Yes, look at CompactSplitThread
>
> 2. If hbase uses multiple compaction t
>
> On Fri, Aug 2, 2013 at 12:07 PM, Jean-Daniel Cryans
> wrote:
>
>> Doing a bin/stop-hbase.sh is the way to go, then on the Hadoop side
>> you do stop-all.sh. I think your ordering is correct but I'm not sure
>> you are using the right commands.
>>
>>
ster, but not the Region
> Servers, then restarting HDFS. What's the correct order of operations for
> bouncing everything?
>
>
> On Thu, Aug 1, 2013 at 5:21 PM, Jean-Daniel Cryans wrote:
>
>> Can you follow the life of one of those blocks though the Namenode and
>>
> nothing special about data05, and it seems to be in the cluster, the same
> as anyone else.
>
>
> On Thu, Aug 1, 2013 at 5:04 PM, Jean-Daniel Cryans wrote:
>
>> I can't think of a way how your missing blocks would be related to
>> HBase replication, there's so
I can't think of a way how your missing blocks would be related to
HBase replication, there's something else going on. Are all the
datanodes checking back in?
J-D
On Thu, Aug 1, 2013 at 2:17 PM, Patrick Schless
wrote:
> I'm running:
> CDH4.1.2
> HBase 0.92.1
> Hadoop 2.0.0
>
> Is there an issue
"Unable to load realm info from SCDynamicStore" is only a warning and
a red herring.
What seems to be happening is that your shell can't reach zookeeper.
Are Zookeeper and HBase running? What other health checks have you
done?
J-D
On Tue, Jul 30, 2013 at 10:28 PM, Seth Edwards wrote:
> I am som
Can you tell who's doing it? You could enable IPC debug for a few secs
to see who's coming in with scans.
You could also try to disable pre-fetching, set hbase.client.prefetch.limit to 0
Also, is it even causing a problem or you're just worried it might
since it doesn't look "normal"?
J-D
On Mo
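A sketch of turning that off on the client (property name as given above):

  Configuration conf = HBaseConfiguration.create();
  conf.setInt("hbase.client.prefetch.limit", 0); // disable region-location prefetching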
You could always set hbase.online.schema.update.enable to true on your
master, restart it (but not the cluster), and you could do what you
are describing... but it's a risky feature to use before 0.96.0.
Did you also set hbase.replication to true? If not, you'll have to do
it on the region servers