, you're probably OK. I would definitely recommend you recompile
HBase if you're using it for a production system. You wouldn't want to
be chasing a fix for this if it manifests in a subtle/strange manner.
- Josh
[1] https://docs.oracle.com/javase/specs/jls/se7/html/jls-13.html
On 12/20/21 11:01 AM
This is not indicative of an outright problem; it can simply be a result
of your data and the hardware on which you are running HBase.
Things to note from this data:
1. This RPC will return up to 1000 rows
2. The size of the data returned is not consistent (200KB for one, 18B
for the other)
3.
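For illustration, the per-RPC row cap comes from the client-side scanner
caching setting. A minimal sketch, where the table name "t" is a
placeholder and 1000 matches the figure above:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;

    public class CachingScan {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("t"))) {
          Scan scan = new Scan();
          scan.setCaching(1000); // at most 1000 rows come back per scan RPC
          try (ResultScanner scanner = table.getScanner(scan)) {
            for (Result r : scanner) {
              // each Result is one row; the RPC payload size varies
              // with how wide the rows are, hence 200KB vs 18B
            }
          }
        }
      }
    }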
HBase in order to get HBase to work with Hadoop 3.1 or
higher?
On Mon, Oct 18, 2021 at 7:13 AM Josh Elser wrote:
Are the Hadoop JARs which you're using inside HBase the same as the
Hadoop version you're running? (e.g. in $HBASE_HOME/lib)
On 10/15/21 6:18 PM, Damillious Jones wrote:
Hi all, I am seeing a similar issue which is noted in HBASE-26007 where
HBase will not start if dfs.encrypt.data.transfer in
No worries. Thanks for confirming!
On 10/10/21 1:43 PM, Simon Mottram wrote:
Hi
Thanks for the reply; I posted here by mistake and wasn't sure how to delete it.
It's indeed a problem with Phoenix
Sorry to waste your time
Cheers
S
From: Josh Elser
Sent
That error sounds like a bug in Phoenix.
Maybe you could try with a newer version of Phoenix? Asking over on
user@phoenix might net a better result.
On 9/27/21 11:47 PM, Simon Mottram wrote:
Forgot to mention this is only an issue for LAST_VALUE (so far!)
This works fine
SELECT
+1 for following up in Phoenix for Phoenix-specific question, but I
thought it was worth mentioning that there's no reason that you can't do
"high throughput" access to HBase via Phoenix. Phoenix has parity for
most high-throughput approaches that you would have access to in HBase.
There is
Export is a MapReduce job, and HBase will only configure a maximum of
one Mapper per Region in the table being scanned.
If you have multiple regions for your tsdb table, then it's possible
that you need to tweak the concurrency on the YARN side such that you
have multiple Mappers running in
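To see the mapper ceiling Export will get, you can count the table's
regions with the client API. A minimal sketch; the "tsdb" table name is
taken from the thread above:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class RegionCount {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator =
                 conn.getRegionLocator(TableName.valueOf("tsdb"))) {
          // One mapper per region is the ceiling for the Export job.
          System.out.println("regions = "
              + locator.getAllRegionLocations().size());
        }
      }
    }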
Looks like you're running a third party company's distribution of HBase.
I'd recommend you start by engaging them for support.
Architecturally, HBase does very little when there is no client load
applied to the system. If you're experiencing OOME's when the system is
idle, that sounds like
You were able to work around the durability concerns by skipping the WAL (never
forget that this means your data in HBase is *not* guaranteed to be there).
We’re already doing this. This is actually not a problem for us, because we
verify the data after the import (using our own restore-test
Your analysis seems pretty accurate so far. Ultimately, it sounds like
your SAN is the bottleneck here.
You were able to work around the durability concerns by skipping the WAL
(never forget that this means your data in HBase is *not* guaranteed to
be there).
It sounds like compactions are
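For reference, skipping the WAL is a per-mutation client-side setting. A
minimal sketch, with hypothetical table/family/qualifier names:

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Durability;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SkipWalPut {
      static void writeUnsafe(Table table) throws IOException {
        Put put = new Put(Bytes.toBytes("row-1"));
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
        put.setDurability(Durability.SKIP_WAL); // no WAL entry: edits not yet
                                                // flushed are lost on a crash
        table.put(put);
      }
    }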
The Apache HBase community does not provide any compatibility matrix
which includes operating systems. The compatibility matrix which HBase
does provide includes Java version, Hadoop version, and some other
expectations like SSH, DNS, and NTP.
Looks like you don't have the pthread library available. Did you make
sure you installed the necessary prerequisites for your operating system?
I'd suggest you take Hadoop compilation questions to the Hadoop user
mailing list for some more prompt answers.
On 4/13/21 1:35 AM, Ascot Moss
Would recommend you reach out to Cloudera Support if you're already
using CDH. They will be able to help you in a more hands-on way, with
steps to find the busted procWAL(s) and recover.
On 4/7/21 2:11 AM, Marc Hoppins wrote:
Unfortunately, we are currently stuck using CDH 6.3.2 with Hbase 2.1.0.
`-DskipTests` is the standard Maven "ism" to skip tests. Your output
appears to indicate that you were running tests, so perhaps your
invocation was incorrect? You've not provided enough information for us
to know why exactly your build failed.
You do not have to build HBase from source in
sage-
From: Josh Elser
Sent: Tuesday, January 12, 2021 4:56 PM
To: user@hbase.apache.org
Subject: Re: Region server idle
Yes, in general, HDFS rebalancing will cause a decrease in the performance of
HBase as it removes the ability for HBase to short-circuit some read logic. It
sh
cluster
but one of the more important ones. I am not sure if we can finish up any RITs
to make the database 'passive' enough to perform a major compaction.
Once again, experience in this area may be giving me misinformation.
-Original Message-
From: Josh Elser
Sent: Monday, January 11
The Master stacktrace you have there does read as a bug, but it
shouldn't be affecting balancing.
That Chore is doing work to apply space quotas, but the quotas you have
here are only RPC (throttle) quotas. Might be something already fixed since
the version you're on. I'll see if anything jumps
+1
On 6/22/20 4:03 PM, Sean Busbey wrote:
We should change our use of these terms. We can be equally or more clear in
what we are trying to convey where they are present.
That they have been used historically is only useful if the advantage we
gain from using them through that shared context
https://hbase.apache.org/mail-lists.html
On 6/18/20 9:10 PM, Govindhan S wrote:
Hello Josh,
Great Day.
I don't see a subscribe option over there. Could you please educate me
more on this?
~ Govins
On Friday, 19 June, 2020, 02:17:13 am IST, Josh Elser
wrote:
Please subscribe
Please subscribe to the list so that you see when people reply to you.
https://lists.apache.org/thread.html/r68d91878bb6576850233bce83baa3479a19fedeeb32c76151c8c9abc%40%3Cuser.hbase.apache.org%3E
I recall a version of HBase 2 where MasterProcWALs didn't get cleaned
up. Given that the ID count in your pv2 WAL file names is up in the 200Ks, I
would venture a guess that the master is just spinning to process a
bunch of old procedures.
You could try to move them to the side and be prepared to use
`hbase wal` requires you to provide options. You provided none, so the
command printed you the help message.
Please read the help message and provide the necessary ""
argument(s).
On 6/16/20 11:57 AM, Govindhan S wrote:
Hello Hbase Users,
I am a newbie to hbase. I do have a HDInsight
HBase (daemons) try to use a single connection for themselves. A RS also
does not need to mutate state in ZK to handle things like gets and puts.
Phoenix is probably the thing you need to look at more closely
(especially if you're using an old version of Phoenix that matches the
old HBase 1.1
+1 to the idea, -0 to the implied execution
I agree hbase-connectors is a better place for REST and thrift, long term.
My concern is that I read this thread as suggesting:
1. Remove rest/thrift from 2.3
1a. Proceed with 2.3.0 rc's
2. Add rest/thrift to hbase-connectors
...
n. Release
Per the guidance on the HBase book preface[1], I'll forward Barani's
question to the HBase private list. I'd kindly request no further
communication here until the question can be properly evaluated.
Thanks.
[1] https://hbase.apache.org/book.html#_preface
On 3/10/20 1:07 PM, Barani Bikshandi
Hi Junhong,
We don't run the Jira instance at issues.apache.org; we just use it. I
would suggest you contact the ASF infra team at us...@infra.apache.org.
They will have the ability to help debug this with you.
On Thu, Feb 13, 2020, 04:56 Junhong Xu wrote:
> Hello, guys:
>I am in China,
There have been multiple issues filed in Hadoop relating to the
implementation differences of IBM Java compared to Oracle Java and
OpenJDK [1]. Make sure that you're not running into any of them as a
first step.
After that, you'd want to compare the differences of the Java platforms,
with
They are not dead. I have personally gone through the efforts to keep
them alive under the Apache Phoenix PMC.
If you have an interest in them, please get involved :)
On 2/3/20 9:13 PM, Kang Minwoo wrote:
I looked around Apache Omid and Apache Tephra.
It seems like they're dead.
Are there
this mailing list[1]. Users can subscribe to this
list by standard approach: mailto:user-zh-subscr...@hbase.apache.org.
- Josh (on behalf of the HBase PMC)
[1] https://hbase.apache.org/mail-lists.html
releases, we're always looking for more people to
help drive the release process. Those who can corral Jira issues, do
testing, and stage release candidates are very welcome and desired to
help make our releases happen on a regular cadence. If you have the
time/resources to help out, let us know
Hi Peng,
While we recognize that the Apache communities are global communities
where people speak all languages, the ASF requests that communication be
done in English[1]
Could you translate your original message for us, please?
[1]
Minor clarification -- Phoenix out of the box doesn't actually need
Tephra or Omid to support transactional index updates, but both of them
are options you can choose to use.
The implementation of this has recently changed as well -- read up at
owse/HBASE-20774
-Austin
On 11/1/19 8:04 AM, Wellington Chevreuil wrote:
Ah yeah, didn't realise it would assume same FS, internally. Indeed,
no way
to have rename working between different FSes.
On Thu, Oct 31, 2019 at 16:25, Josh Elser wrote:
Short answer: no, it will not work an
Hey Shuai,
You're likely to get some more traction with this question via
contacting Cloudera's customer support channels. We try to keep this
forum focused on Apache HBase versions.
If you are not seeing records after restoring, it sounds like there is
some (missing?) metadata in the old
Short answer: no, it will not work and you need to copy it to HDFS first.
IIRC, the bulk load code is ultimately calling a filesystem rename from
the path you provided to the proper location in the hbase.rootdir's
filesystem. I don't believe that an `fs.rename` is going to work across
You might get some more traction on user@phoenix since you're not really
asking an HBase specific question here.
Phoenix doesn't have any native capabilities to create/maintain
materialized views for you, but, if your data sets infrequently change,
you could manage that aspect on your own.
Deletes are held in memory. They represent data you have to traverse
until that data is flushed out to disk. When you write a new cell with a
qualifier of 10, that sorts, lexicographically, "early" with respect to
the other qualifiers you've written.
By that measure, if you are only scanning
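A quick way to convince yourself of that ordering, using HBase's own
byte comparator:

    import org.apache.hadoop.hbase.util.Bytes;

    public class QualifierOrder {
      public static void main(String[] args) {
        byte[] q10 = Bytes.toBytes("10");
        byte[] q2 = Bytes.toBytes("2");
        // Prints a negative number: "10" sorts before "2" byte-wise
        // (because '1' < '2'), even though 10 > 2 numerically.
        System.out.println(Bytes.compareTo(q10, q2));
      }
    }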
Luoc was added already.
It's not clear to me if Nestor was also asking for an invitation, but I
sent one to them anyways.
On 7/21/19 8:11 PM, Néstor Boscán wrote:
On Sun, Jul 21, 2019 at 10:33 AM luoc wrote:
Hello,
I am interested to start to make contributions and I want to request
will be
in `hbase-assembly/target`.
- Josh
On 6/12/19 10:35 AM, Rebekah K. wrote:
Hello,
I was recently trying to install and run hbase with my hadoop
installation and wanted to report the following error I ran into
and how I was able to solve it...
Hadoop Version: Hadoop 3.1.2
Hbase Version: 2.1.5
Reminds me of https://issues.apache.org/jira/browse/HBASE-21915 too.
Agree with Wei-Chiu that I'd start by ruling out HDFS issues first, and
then start worrying about HBase issues :)
On 6/1/19 8:05 PM, Wei-Chiu Chuang wrote:
I think i found a similar bug report that matches your symptom:
Hi Guillermo,
Yes, you are missing something.
TableInputFormat uses the Scan API just like Spark would.
Bypassing the RegionServer and reading from HFiles directly is
accomplished by using the TableSnapshotInputFormat. You can only read
from HFiles directly when you are using a Snapshot, as
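A minimal sketch of wiring that up; the snapshot name, restore directory,
and no-op mapper are all placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.mapreduce.Job;

    public class SnapshotScanJob {
      static class NoopMapper extends TableMapper<NullWritable, NullWritable> {
        @Override
        protected void map(ImmutableBytesWritable key, Result value, Context ctx) {
          // rows are read straight from the snapshot's HFiles,
          // bypassing the RegionServers entirely
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "snapshot-scan");
        // "my_snapshot" and the restore dir are hypothetical; the restore
        // dir must live on the same filesystem as hbase.rootdir.
        TableMapReduceUtil.initTableSnapshotMapperJob(
            "my_snapshot", new Scan(), NoopMapper.class,
            NullWritable.class, NullWritable.class, job,
            true, new Path("/tmp/snapshot-restore"));
        job.setNumReduceTasks(0);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }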
://dataworkssummit.com/nosql-day-2019/
For those still on the fence, please use the code NSD50 to get 50% off the
registration fee.
Thanks and see you there!
- Josh
Sounds like a bug to me.
On 5/7/19 5:52 AM, Kang Minwoo wrote:
Why not use the "doNotRetry" value in RemoteWithExtrasException?
From: Kang Minwoo
Sent: Tuesday, May 7, 2019 18:23
To: user@hbase.apache.org
Subject: Why HBase client retry even though
Are you reading the log messages? I'm really struggling to understand
what is unclear given what you just included.
2019-04-04 04:27:14,029 FATAL [db-2:16000.activeMasterManager] master.HMaster:
Failed to become active master
org.apache.hadoop.security.AccessControlException: Permission
On 4/7/19 10:44 PM, melank...@synergentl.com wrote:
On 2019/04/04 15:15:37, Josh Elser wrote:
Looks like your RegionServer process might have died if you can't
connect to its RPC port.
Did you look in the RegionServer log for any mention of an ERROR or
FATAL log message?
On 4/4/19 8:20 AM, melank...@synergentl.com wrote:
I have installed Hadoop single node
abstract. Of course, those talks which
are selected will receive a complimentary pass to attend the event.
Please reply to a single user list or to me directly with any questions.
Thanks!
- Josh
CVE-2019-0212: HBase REST Server incorrect user authorization
Description: In all previously released Apache HBase 2.x versions,
authorization was incorrectly applied to users of the HBase REST server.
Requests sent to the HBase REST server were executed with the
permissions of the REST
A superuser should be able to still initiate a compaction:
https://issues.apache.org/jira/browse/HBASE-17978
If the compaction didn't actually happen, that's a problem.
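Requesting the compaction through the Java API looks roughly like this
(table name hypothetical; note the call only queues the request):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class RequestMajorCompaction {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Asynchronous: this queues the compaction and returns immediately.
          admin.majorCompact(TableName.valueOf("mytable"));
        }
      }
    }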
On 3/13/19 3:09 AM, Uma wrote:
-- Forwarded message -
From: Uma
Date: Wed 13 Mar, 2019, 6:54 AM
Subject:
).
Best regards,
Minwoo Kang
From: Josh Elser
Sent: Wednesday, February 27, 2019 01:32
To: user@hbase.apache.org
Subject: Re: HBase client spent most time in ThreadPoolExecutor
Minwoo,
You have found an idle thread in the threadpool that is waiting for
work. This is not the source of your slowness. The thread is polling the
internal queue of work, waiting for the next "unit" of something to do.
You should exclude threads like these from your analysis.
On 2/26/19
Hi Jagan,
Right now, Authorization checks inside of the RegionServer aren't
well-quantified, but it is possible. One example of software that does
this today is Apache Ranger.
However, your plan to provide custom client-side data is going to take a
bit more effort as you'll also need to
Compared to 2.0.4, I believe you'll be better off moving onto HBase
2.1.2 at this point. IIRC, the consensus was to shift focus onto 2.1
(eventually, 2.2, and onward) instead of letting people get stuck on
"old" versions.
In general, I'd expect the 1.4 line to be rather bullet-proof, but
Hi Davis,
I don't think we have a release planned yet for the hbase-connectors
library. I know our mighty Stack has been doing lots of the heavy
lifting lately.
If you're interested/willing, I'm sure we'd all be gracious if you have
the cycles to help out testing what we have in the
Please do not cross-post lists. I've dropped dev@hbase.
This doesn't seem like a replication issue. As you have described it, it
reads more like a data-correctness issue. However, I'd guess that it's
more related to timestamps rather than be an issue on your cluster.
If there was no error,
about what happened. Let me know if that would be
helpful. If we can't get to the bottom of how this happened, maybe we
can figure out why hbck couldn't fix it.
On 10/3/18 12:53 PM, Austin Heyne wrote:
Josh: Thanks for all your help! You got us going down a path that led
to a solution.
Thought I
's getting these references from.
-Austin
On 09/30/2018 02:38 PM, Josh Elser wrote:
First off: You're on EMR? What version of HBase you're using? (Maybe
Zach or Stephen can help here too). Can you figure out the
RegionServer(s) which are stuck opening these PENDING_OPEN regions? Can
you get a jstack/thread-dump from those RS's?
In terms of how the system is supposed to work:
That thread is a part of the ThreadPool that HConnection uses and that
thread is simply waiting for a task to execute. It's not indicative of
any problem.
See how the thread is inside of a call to LinkedBlockingQueue#poll()
On 9/28/18 3:02 AM, Lalit Jadhav wrote:
While load testing in the
Please be patient in waiting for a response to questions you post to this
list, as we're all volunteers.
On 9/8/18 2:16 AM, onmstester onmstester wrote:
Hi, Currently I'm using Apache Cassandra as the backend for my RESTful application. Having a cluster of 30 nodes (each having 12 cores, 64gb ram and 6
1. Yes
2. HDFS NN pressure, read slow down, general poor performance
3. Default configuration is weekly; if you don't explicitly know some
reasons why weekly doesn't work, this is what you should follow ;)
4. No
I would be surprised if you need to do anything special with S3, but I
don't know
Manjeet -- you are still missing the fact that if you do not split your
table into multiple regions, your data will not be distributed.
Why do you think that your rowkey design means you can't split your table?
On 9/3/18 6:09 AM, Manjeet Singh wrote:
Hi Josh
Sharing steps and my findings
If it was related to maxClientCnxns, you would see sessions being
torn-down and recreated in HBase on that node, as well as a clear
message in the ZK server log that it's denying requests because the
number of outstanding connections from that host exceeds the limit.
ConnectionLoss is a
not mistaken the Normalizer will keep the same number of regions,
but will make their sizes uniform, right? So if the goal is to reduce the
number of regions, the Normalizer might not help?
JMS
On Fri, Aug 31, 2018 at 09:16, Josh Elser wrote:
There's the Region Normalizer which I'd presume would be in an HBase 1.4
release
https://issues.apache.org/jira/browse/HBASE-13103
On 8/30/18 3:50 PM, Austin Heyne wrote:
I'm using HBase 1.4.4 (AWS/EMR) and I'm looking for an automated
solution because I believe there are going to be a few
As I've been trying to explain in Slack:
1. Are you including the salt in the data that you are writing, such
that you are spreading the data across all Regions per their boundaries?
Or, as I think you are, just creating split points with this arbitrary
"salt" and not including it when you
(-cc user@hbase, +bcc user@hbase)
How about the rest of the stacktrace? You didn't share the cause.
On 8/20/18 1:35 PM, Mich Talebzadeh wrote:
This was working fine before my Hbase upgrade to 1.2.6
I have Hbase version 1.2.6 and Phoenix
version apache-phoenix-4.8.1-HBase-1.2-bin
This
Nothing in here indicates why the RegionServers actually failed.
If the RegionServer crashed, there is very likely a log message at
FATAL. You want to find that to understand what actually caused it.
On 8/13/18 4:22 PM, Adep, Karankumar (ETW - FLEX) wrote:
Hi,
Region Server Crashes with
most used),
but we have the capacity to provide more than just that.
On 7/24/18 5:55 PM, Umesh Agashe wrote:
Thanks Stack, Josh and Andrew for your suggestions and concerns.
I share Stack's suggestions. This would be similar to hbase-thirdparty. The
new repo could be hbase-hbck/hbase-hbck2. As
Unless you are including the date+time in the rowKey yourself, no.
HBase has exactly one index for fast lookups, and that is the rowKey.
Any other query operation is (essentially) an exhaustive search.
On 7/11/18 12:07 PM, Ming wrote:
Hi, all,
Is there a way to get the last row
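One common pattern for that, sketched below with hypothetical names: put a
reversed timestamp after the id in the rowKey, so the newest row for an id
sorts first and a single-row Scan starting at the id prefix finds it cheaply.

    import org.apache.hadoop.hbase.util.Bytes;

    public class LatestRowKey {
      // Composite key: id + reversed epoch millis. Larger timestamps
      // produce smaller reversed values, so "latest" sorts first.
      static byte[] rowKey(String id, long epochMillis) {
        return Bytes.add(Bytes.toBytes(id),
            Bytes.toBytes(Long.MAX_VALUE - epochMillis));
      }
    }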
You might also need hbase.wal.meta_provider=filesystem (if you haven't
already realized that)
On 7/2/18 5:43 PM, Andrey Elenskiy wrote:
hbase.wal.provider = filesystem
Seems to fix it, but would be nice to actually try the fanout wal with
hadoop 2.8.4.
On Mon, Jul 2, 2018 at 1:03 PM, Andrey
CVE-2018-8025 describes an issue in Apache HBase that affects the
optional "Thrift 1" API server when running over HTTP. There is a
race-condition which could lead to authenticated sessions being
incorrectly applied to users, e.g. one authenticated user would be
considered a different user or
Use `mvn package`, not `compile`.
On 6/21/18 10:41 AM, Andrzej wrote:
W dniu 21.06.2018 o 19:01, Andrzej pisze:
Is there any alternative for fast control of HBase from C++ sources?
Or only the Java client?
Native Client C++ (HBASE-14850) sources are old and mismatch to folly
library (Futures.h)
Now I
You shouldn't be putting the phoenix-client.jar on the HBase server
classpath.
There is the phoenix-server.jar, which is specifically built
to be included in HBase (to avoid issues such as these).
Please remove all phoenix-client jars and provide the
phoenix-5.0.0-server jar
Yep, you got it :)
Easy doc fix we can get in place.
On 5/14/18 2:25 PM, Kevin Risden wrote:
Looks like this might have triggered
https://issues.apache.org/jira/browse/HBASE-20581
Kevin Risden
On Mon, May 14, 2018 at 8:46 AM, Kevin Risden wrote:
We are using HDP 2.5
This question is better asked on the Phoenix users list.
The phoenix-client.jar is the one you need and is distinct from the
phoenix-core jar. Logging frameworks are likely not easily
relocated/shaded to avoid issues which is why you're running into this.
Can you provide the error you're
We've received some requests to extend the CFP a few more days. The new
closing date will be this Friday, 2018/04/20, end of day.
Please keep them coming in!
On 4/15/18 9:23 PM, Josh Elser wrote:
The HBaseCon 2018 call for proposals is scheduled to close Monday, April
16th. If you have
/hbasecon-2018/
- Josh (on behalf of the HBase PMC)
(-to dev, +bcc dev, +to user)
Hi Stefano,
Moving your question over to the user@ mailing list as it's not so much
about development of HBase, instead development when using HBase.
Q1: what do you mean by the "latest field"? Are you talking about the
latest version of a Cell for a column
Oh, and the most important part:
Submit your talks here: https://easychair.org/conferences/?conf=hbasecon2018
On 4/9/18 10:26 PM, Josh Elser wrote:
Hi folks!
A gentle reminder that the HBaseCon 2018 call for proposals remains open
for just one more week -- until April 16th. The event is held
, content, and audience are
welcome, with just a few paragraphs required to tell us about what you
want to speak about.
Please feel free to reach out if there are any questions!
- Josh (on behalf of the HBase PMC)
Buffers, Google Guava, and the like.
This 2.1.0 release contains a number of updates in support of the
upcoming Apache HBase 2.0.0 release. The release is available through
dist.a.o[1] (as well as the mirror framework) and Maven central[2].
Release notes are also available [3].
- Josh (on behalf
Yes, you can bulk load into a table which already contains data.
The ideal case is that you generate HFiles which map exactly to the
distribution of Regions on your HBase cluster. However, given that we
know that Region boundaries can change, the bulk load client
(LoadIncrementalHFiles) has
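A minimal client-side sketch of driving the bulk load (paths and table
name are hypothetical; in HBase 2.x the class was superseded, so check
your release):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

    public class BulkLoad {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName tn = TableName.valueOf("t");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin();
             Table table = conn.getTable(tn);
             RegionLocator locator = conn.getRegionLocator(tn)) {
          LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
          // HFiles that straddle a region boundary are split client-side
          // before being handed to the owning RegionServers.
          loader.doBulkLoad(new Path("/user/me/hfiles"), admin, table, locator);
        }
      }
    }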
).
Please find all available details for the event at [2], and feel free to
ask the d...@hbase.apache.org mailing list or myself any questions.
Thanks and start planning those talks!
- Josh (on behalf of the HBase PMC)
[1] https://easychair.org/conferences/?conf=hbasecon2018
[2] https
There was an HBase RPC connection from a client at the host (identified
by the IP:port you redacted). IIRC, the "read count=-1" is essentially
saying that the server tried to read from the socket and read no data
which means that the client has hung up. There were 33 other outstanding
HBase
ion=3.4.6 but in pom we have 3.4.10. I am
gonna
try rebuilding it with 3.4.10.
On 23 February 2018 at 00:29, Josh Elser <els...@apache.org> wrote:
This sounds like something I've seen in the past but was unable to get
past. I think I was seeing it when the hbase-shaded-client was on the
classpath. Could you see if the presence of that artifact makes a
difference one way or another?
On 2/22/18 12:52 PM, sahil aggarwal wrote:
Yes, it is
The Apache Phoenix PMC is happy to announce the release of Phoenix
5.0.0-alpha for Apache Hadoop 3 and Apache HBase 2.0. The release is
available for download at here[1].
Apache Phoenix enables OLTP and operational analytics in Hadoop for low
latency applications by combining the power of
Hi Andrew,
Yes. The answer is, of course, that you should see consistent results
from HBase if there are no mutations in flight to that table. Whether
you're reading "current" or "back-in-time", as long as you're not
dealing with raw scans (where compactions may persist delete
tombstones),
Hey Kevin!
Looks like you got some good changes in here.
IMO, the HBase Thrift2 "implementation" makes more sense to me (I'm sure
there was a reason for having HTTP be involved at one point, but Thrift
today has the ability to do all of this RPC work for us). I'm not sure
what the HBase API
There is no such artifact with the groupId & artifactId
org.apache.hbase:hbase for Apache HBase. I assume it would be the same for CDH.
You need the test jar from hbase-server if you want the
HBaseTestingUtility class.
On 1/5/18 10:23 AM, Debraj Manna wrote:
Cross posting from
-
Redundant power supplies are probably your next-best bet for running
without fsync (hsync). Something that can prevent a node from going down
hard will mitigate this issue for the most part.
The importance of this is often a multi-variable equation. The small
chance for data loss that exists
Thanks for sharing, Sahil.
A couple of thoughts at a glance:
* You should add a LICENSE to your project so people know how they can
(re)use your project.
* You have a dependency against 1.0.3, and I see at least one thing that
will not work against 2.0.0. Would be great if you wrote up what
The most reliable way (read-as, likely to continue working across HBase
releases) would probably be to implement a custom ReplicationEndpoint.
This would abstract away the logic behind "tail'ing of WALs" and give
you some nicer APIs to leverage. Beware that this would still be a
rather
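A very rough sketch of the shape such an endpoint takes, written against
the HBase 1.x API (the lifecycle methods follow Guava's Service pattern and
differ across releases, so treat this as an outline only):

    import java.util.UUID;
    import org.apache.hadoop.hbase.replication.BaseReplicationEndpoint;
    import org.apache.hadoop.hbase.wal.WAL;

    public class LoggingEndpoint extends BaseReplicationEndpoint {
      @Override
      public UUID getPeerUUID() {
        return UUID.randomUUID(); // identifies the "cluster" edits ship to
      }

      @Override
      public boolean replicate(ReplicateContext context) {
        for (WAL.Entry entry : context.getEntries()) {
          System.out.println(entry); // ship the WAL edits somewhere instead
        }
        return true; // true == batch handled; the source advances its position
      }

      @Override
      protected void doStart() { notifyStarted(); }

      @Override
      protected void doStop() { notifyStopped(); }
    }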
Lalit,
Typically, questions about "vendor products" are best reserved for their
respective forums. This question is not relevant to the Apache HBase
community.
Please consider asking your question on
https://community.hortonworks.com/ instead.
- Josh
On 9/19/17 2:17 AM, La
FWIW, last I looked into this,
https://issues.apache.org/jira/browse/HBASE-15154 would be the long-term
solution to the Master also requiring the MaxDirectMemorySize
configuration (even when it is not acting as a RegionServer).
Obviously, it's a lower priority fix as there is a simple
e
indexes to your heart's content (space permitting of course).
-Original Message-
From: Andrzej [mailto:borucki_andr...@wp.pl]
Sent: Wednesday, August 30, 2017 11:03 AM
To: user@hbase.apache.org
Subject: Re: Fast search by any column
W dniu 30.08.2017 o 19:54, Dave Birdsall pisze:
As
You may find Apache Phoenix to be of use as you explore your requirements.
Phoenix provides a much higher-level API which provides logic to build
composite rowkeys (e.g. primary key constraints over multiple columns)
for you automatically. This would help you iterate much faster as you
better
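For example (hypothetical schema and ZooKeeper quorum), the two-column
primary key below becomes the HBase rowkey, with no client-side key
assembly needed:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class PhoenixCompositeKey {
      public static void main(String[] args) throws SQLException {
        // Phoenix maps the composite PRIMARY KEY constraint onto the
        // HBase rowkey (sensor_id followed by event_time) for you.
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost:2181")) {
          conn.createStatement().execute(
              "CREATE TABLE IF NOT EXISTS readings (" +
              "  sensor_id VARCHAR NOT NULL," +
              "  event_time TIMESTAMP NOT NULL," +
              "  reading DOUBLE," +
              "  CONSTRAINT pk PRIMARY KEY (sensor_id, event_time))");
          conn.commit();
        }
      }
    }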