On Thu, May 28, 2015 at 7:53 PM, Ted Yu wrote:
> What release of HBase are you using?
>
I'm using 0.98.9. I should have mentioned it before anything else.
>
> If you trace 5fe111d3302cefb2e96d541a489ff026 in the master log, do you find
> any clue?
>
From the timestamps of the logs, all these warn
Hey Ajay,
Your topic of discussion is too broad.
There are tons of comparisons of HBase vs Cassandra:
https://www.google.com/search?q=hbase+vs+cassandra&ie=utf-8&oe=utf-8
Which one you should use boils down to your use case: strong consistency? range
scans? need for deeper integration with the Hadoop ecosystem?
See http://hbase.apache.org/book.html#perf.network
Cheers
On Fri, May 29, 2015 at 12:20 PM, Lukáš Vlček wrote:
> As for the #4 you might be interested in reading
> https://aphyr.com/posts/294-call-me-maybe-cassandra
> Not sure if there is a comparable article about HBase (anybody knows?)
Funny, I was just on a conf call with a Hortonworks engineer. His take was
that if you need/want to be part of the wider Hadoop ecosystem, HBase;
otherwise it was pretty much a wash.
john
On Fri, May 29, 2015 at 3:12 PM, Ajay wrote:
> Hi,
>
> I need some info on HBase vs Cassandra as a data store
As for the #4 you might be interested in reading
https://aphyr.com/posts/294-call-me-maybe-cassandra
Not sure if there is a comparable article about HBase (anybody knows?), but it
can give you another perspective on what else to keep an eye on regarding
these systems.
Regards,
Lukas
On Fri, May 2
Hi,
I need some info on HBase vs Cassandra as a data store (in general, plus
specific to time-series data).
A comparison on the following would help:
1: features
2: deployment and monitoring
3: performance
4: anything else
Thanks
Ajay
A colleague of mine has a question about scalability and connections to
HBase.
We’d like to use the label-based controls for our content. Those labels
are tied to users and users are specified on connections (not when getting
the HBase table, which is really too bad because if they were I wouldn’t
Yes, we are running MR on HBase. I tried running MR on snapshots, but the data
in our HBase changes very frequently, so we end up occupying twice the space
and running into full disks.
I think we are hitting the large-HBase-heap-plus-M/R problem. I will try to add
some more space to our cluster and
Is there any reason for the 27G heap? It seems you run an M/R job? If yes, then
I would recommend trying M/R over snapshots. The combination of a large HBase
heap and M/R is very hard to tune, if possible at all.
You can also try reducing the number of map tasks and checking your MR job's
resource consumption.
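If you do revisit M/R over snapshots, note that a snapshot only references existing HFiles; the extra disk usage accumulates because compactions rewrite files while the snapshot pins the old ones. Deleting snapshots as soon as the job finishes limits that. A sketch in the HBase shell (table and snapshot names are placeholders):

```shell
hbase> snapshot 'events', 'events-snap-20150529'   # take the snapshot the job will read
hbase> list_snapshots                              # confirm it exists
# ... run the M/R job against the snapshot ...
hbase> delete_snapshot 'events-snap-20150529'      # unpin old HFiles so they can be reclaimed
```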
On May
That one long GC aside, look at the timings of the others as well. Even
the smaller GCs are taking up the majority of each second.
For a heap that size you might want to try a Java version newer than 7u60 and
use G1GC. Otherwise there are a bunch of resources on the web, including in
the ref guide.
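As a starting point, switching the region server to G1 comes down to a few JVM flags in hbase-env.sh. This is a minimal sketch only; the pause target and log path below are illustrative assumptions, not tuned recommendations:

```shell
# hbase-env.sh (sketch): move the region server from CMS to G1.
# Needs a JVM newer than 7u60. The values below are placeholders to adjust
# for your own heap and workload, not recommendations.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -Xms27g -Xmx27g \
  -XX:+UseG1GC \
  -XX:MaxGCPauseMillis=100 \
  -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -Xloggc:/var/log/hbase/gc-regionserver.log"
```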
This is a sample from the GC log file. At the end I see long GC pauses. Is
there a way I can tune this?
2015-04-29T22:46:12.387+: 98061.660: [GC2015-04-29T22:46:12.387+:
98061.661: [ParNew: 572757K->63867K(580608K), 0.6549550 secs]
13294553K->12811090K(20001132K), 0.6551600 secs] [Times: user
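A quick way to quantify pauses like the one above is to pull the pause times out of the log and flag the long ones. A minimal sketch; the regex assumes the ParNew/CMS log shape shown above, and the 0.5 s threshold is an arbitrary choice:

```python
import re

# Matches the total pause time that closes a ParNew/CMS collection record,
# e.g. "... 13294553K->12811090K(20001132K), 0.6551600 secs] [Times: ..."
PAUSE_RE = re.compile(r"(\d+\.\d+) secs\]\s*(?:\[Times|$)")

def long_pauses(lines, threshold_secs=0.5):
    """Return the total pause durations (seconds) that exceed the threshold."""
    pauses = []
    for line in lines:
        for m in PAUSE_RE.finditer(line):
            secs = float(m.group(1))
            if secs > threshold_secs:
                pauses.append(secs)
    return pauses

sample = [
    "2015-04-29T22:46:12.387: 98061.660: [GC 98061.661: "
    "[ParNew: 572757K->63867K(580608K), 0.6549550 secs] "
    "13294553K->12811090K(20001132K), 0.6551600 secs] [Times: user=1.2]",
]
print(long_pauses(sample))  # -> [0.65516]
```

Only the outer 0.655 s total pause is reported; the inner ParNew timing is part of the same stop-the-world event, so counting both would double-count.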
>
> 2014-08-14 21:35:16,740 WARN org.apache.hadoop.hbase.util.Sleeper: We
> slept
> 14912ms instead of 3000ms, this is likely due to a long garbage collecting
> pause and it's usually bad, see
> http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
I would check your GC logs for long GC pauses.
Hi All,
In our cluster, the region server logs are filled with (responseTooSlow)
messages. This is causing jobs to slow down. How can I debug the reason for
this slowness?
We have enabled short-circuit reads, and the region server has 27GB RAM.
Here is a trace when regionserver starts.
Thu Aug 14
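One way to see where the time goes is to rank the slow-call records themselves. In 0.98 the "response too slow" WARN carries a small JSON payload; this sketch assumes a field named processingtimems and the rough line shape shown in the comment, so check one of your own log lines for the exact format:

```python
import json
import re

# A 0.98-style slow-call WARN looks roughly like (shape assumed, verify locally):
#   ... WARN org.apache.hadoop.hbase.ipc.RpcServer: (responseTooSlow):
#       {"processingtimems":31877,"client":"10.0.0.5:41234",...}
SLOW_RE = re.compile(r"\(responseTooSlow\):\s*(\{.*\})")

def slowest_calls(lines, top_n=5):
    """Rank responseTooSlow records by server-side processing time."""
    records = []
    for line in lines:
        m = SLOW_RE.search(line)
        if m:
            records.append(json.loads(m.group(1)))
    records.sort(key=lambda r: r.get("processingtimems", 0), reverse=True)
    return records[:top_n]

sample = [
    'WARN RpcServer: (responseTooSlow): {"processingtimems": 31877, "client": "10.0.0.5:41234"}',
    'WARN RpcServer: (responseTooSlow): {"processingtimems": 1200, "client": "10.0.0.6:50010"}',
]
for rec in slowest_calls(sample):
    print(rec["client"], rec["processingtimems"])
```

If the worst offenders cluster on a few clients or regions, that points at a hot region or an oversized request rather than a cluster-wide GC problem.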
I see; we're still on 0.98, will verify this once we upgrade HBase. Thanks
for all the info!
2015-05-29 1:02 GMT+08:00 Nick Dimiduk:
> On Thu, May 28, 2015 at 12:10 AM, ShaoFeng Shi
> wrote:
>
> > Hi Ted, thanks for giving the link, our scenario is just such a case;
> We're
> > looking forward