...automatic update of the cached data by the database engine. Of course
> HBase does not do that.
>
> Hope that helped,
>
> - Andy
>
> - Original Message
> > From: Hua Su
> > To: hbase-user@hadoop.apache.org
> > Sent: Wed, March 10, 2010 1:01:33 AM
> > Subject: Re: Use cases of HBase
> >
> Hi Purtell,
>
> What do you mean by "Since 0.20.0, results of analytic computations over the
> data can be materialized and served out in real time in response to queries"?
> ...at less cost. Since 0.20.0, results of analytic computations over
> the data can be materialized and served out in real time in response to
> queries. This is a complete solution.
>
>
> - Andy
>
>
>
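The pattern Andy describes above, batch-computing analytics over the raw data and materializing the results so queries are answered in real time, can be sketched as a toy model. The event data, table names, and aggregation below are invented for illustration; in practice the batch step would be a MapReduce job and the serving table would live in HBase.

```python
from collections import defaultdict

# hypothetical raw event log (in real life: rows scanned by a batch job)
raw_events = [
    ("user1", "login"), ("user1", "click"),
    ("user2", "login"), ("user1", "click"),
]

def batch_aggregate(events):
    # stand-in for the batch/MapReduce step: count events per user
    counts = defaultdict(int)
    for user, _event in events:
        counts[user] += 1
    return dict(counts)

# materialize the result into a serving "table"
serving_table = batch_aggregate(raw_events)

def query(user):
    # real-time read against the materialized result; no scan of raw data
    return serving_table.get(user, 0)
```

The point of the pattern is that query latency no longer depends on the size of the raw data, only on a single lookup in the precomputed result.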
- Original Message
> From: Ryan Rawson
> To: hbase-u...@hadoop.apac...
> Sent: Tue, March 9, 2010 3:34:55 PM
> Subject: Re: Use cases of HBase
>
> HBase operates more like a write-thru cache. Recent writes are in
> memory (aka memstore). Older...
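The write path Ryan sketches, recent writes held in an in-memory memstore, older data flushed out to immutable files, with reads checking memory first, can be illustrated with a toy model. This is only an illustration of the idea; the class name, threshold, and flush logic below are invented and are not HBase internals.

```python
class ToyStore:
    """Toy write-through store: memstore in memory, flushed store files behind it."""

    def __init__(self, flush_threshold=3):
        self.memstore = {}      # recent writes, mutable, in memory
        self.store_files = []   # immutable snapshots, newest last
        self.flush_threshold = flush_threshold

    def put(self, key, value):
        self.memstore[key] = value
        if len(self.memstore) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # the memstore becomes an immutable store file; a real system
        # would write a sorted, persisted file here
        self.store_files.append(dict(self.memstore))
        self.memstore = {}

    def get(self, key):
        # recent writes are served straight from memory
        if key in self.memstore:
            return self.memstore[key]
        # otherwise search store files from newest to oldest
        for sf in reversed(self.store_files):
            if key in sf:
                return sf[key]
        return None
```

Note that a fresh write in the memstore shadows any older value already flushed to a store file, which is why recent data is the cheapest to read.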
...look at some of the simpler KV-style stores out there like Tokyo
Cabinet, Memcached, or BerkeleyDB, the in-memory ones like Redis.

JG

-Original Message-
From: jaxzin [mailto:brian.r.jack...@espn3.com]
Sent: Tuesday, March 09, 2010 12:09 PM
To: hbase-user@hadoop.apache.org
Subject: Re: Use cases of HBase
Gary, I looked at your presentation and it was very helpful. But I do have a
few unanswered questions from it, if you wouldn't mind answering them. How
big is/was your cluster that handled 3k req/sec? And what were the specs on
each node (RAM/CPU)?
When you say latency can be good, what you...
Thanks Charles, I definitely realize that HBase might be a wrong fit because
of our current data size, but I'm still interested in the other benefits it
provides. And I'm hoping that if we use HBase it can be the paradigm shift to
keep more data around, since we want all that 'record of activity' stuff...
Slightly off topic, but we have similar requirements as you, and NDBD is
working great. As far as latency goes, you can definitely see millisecond or
less response times using the NDB api. Your throughput requirements should
be a piece of cake as well. 10GB is definitely not "big data", and 1-2 ms...
Thanks Gary, this is great!
I'm designing a central store/service for all user data for the fantasy
section of ESPN.com (profile/preferences/record of activity, you name it).
The record-of-activity wouldn't be at page-view granularity but more like
"created a league" or "won a trophy" type act...
Hey Brian,
We use HBase to complement MySQL in serving activity-stream type data here
at Meetup. It's handling real-time requests involved in 20-25% of our page
views, but our latency requirements aren't as strict as yours. For what
it's worth, I did a presentation on our setup which will hopefully...
This is exactly the kind of feedback I'm looking for, thanks, Barney.
So it sounds like you cache the data you get from HBase in session-based
memory? Are you using a Java EE HttpSession? (I'm less familiar with the
django/rails equivalents, but I'm assuming they exist.) Or are you using a
memory cache...
I am using HBase to store visitor-level clickstream-like data. At the
beginning of the visitor session I retrieve all the previous session data
from HBase, use it within my app server, massage it a little, and serve it
to the consumer via web services. Where I think you will run into the most
problems...
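Barney's pattern, one bulk read of a visitor's history at session start, then serving the rest of the session from app-server memory, can be sketched as follows. The `fetch_history` function is a hypothetical stand-in for the real per-visitor HBase read; the cache shape is invented for illustration.

```python
def fetch_history(visitor_id):
    # hypothetical stand-in for a per-visitor HBase scan
    return {"visits": 3, "last_page": "/home"}

class SessionCache:
    """Load a visitor's history once per session, serve it from memory after."""

    def __init__(self, loader):
        self.loader = loader
        self.sessions = {}   # visitor_id -> cached history
        self.loads = 0       # how many backing-store reads were done

    def get(self, visitor_id):
        # only the first request of a session touches the backing store
        if visitor_id not in self.sessions:
            self.sessions[visitor_id] = self.loader(visitor_id)
            self.loads += 1
        return self.sessions[visitor_id]

    def end_session(self, visitor_id):
        # drop the cached copy (a real app would also write changes back)
        self.sessions.pop(visitor_id, None)
```

The trade-off is the one discussed in the thread: one slightly expensive read at session start buys millisecond-scale responses for every later request in the session.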