As far as we are concerned, a user can only search once per second, view a product once per second, etc., so the keys are unique. If we wanted to be extra careful, I suppose we could use the epoch in milliseconds instead of seconds to enforce this constraint.
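A minimal sketch of what a millisecond-granularity key might look like (the helper name is made up here for illustration; only the "#{type}/#{user}/#{time}" shape comes from the thread):

```ruby
# Build a row key of the form "#{type}/#{user}/#{time}" using epoch
# milliseconds, so two actions by the same user within the same second
# still get distinct keys.
def action_row_key(type, user, time = Time.now)
  millis = (time.to_f * 1000).to_i
  "#{type}/#{user}/#{millis}"
end

action_row_key("search", "alice", Time.at(0))  # => "search/alice/0"
```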

On 8/26/11 3:40 AM, Sheng Chen wrote:
Hi, Mark, just a follow-up on your question.
How do you ensure the uniqueness of the row key #{type}/#{user}/#{time}?
If the action logs are generated from different app servers, it is possible
to have several actions with the same type, user, and timestamp.
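To make the concern concrete: with second granularity, two servers logging the same user action within the same second produce the same key (names below are illustrative, not from the original code):

```ruby
# Second-granularity key, as in the original scheme.
def second_key(type, user, time)
  "#{type}/#{user}/#{time.to_i}"
end

t = Time.at(1314355200)
# Two app servers log the same search 0.4 s apart, inside the same second:
k_a = second_key("search", "alice", t)
k_b = second_key("search", "alice", t + 0.4)
# k_a == k_b, so the second HBase put would land on the same row.
```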

Thanks.
Sean

2011/8/21 Mark <static.void....@gmail.com>

We are logging all user actions into hbase. These actions include searches,
product views and clicks.

We are currently storing them in one table with row keys like so:
"#{type}/#{user}/#{time}", where type is either click, search, or view, and
user is the currently logged-in user. Obviously, this method leads to region
hot-spotting, since the start of each key is fairly static. This got me
thinking about alternative ways I could model this type of data, and I was
hoping I could get some suggestions from the community.

Which would be more advisable?

1) Keep the current all-logs-go-to-one-table pattern described above.
2) Keep the current all-logs-go-to-one-table pattern described above,
but swap the type and user fields, which would lead to more randomized keys
and thus fewer hot spots.
3) Create separate tables for each type of log we are saving, i.e. a
search table, a click table, and a view table.
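The key swap in option 2 might be sketched like this (helper names are made up for illustration):

```ruby
# Option 1: type leads the key, so all "search" rows cluster into the
# same region range and hammer one region server.
def type_first_key(type, user, millis)
  "#{type}/#{user}/#{millis}"
end

# Option 2: user leads the key, so rows spread across the keyspace as
# widely as the user IDs themselves do.
def user_first_key(type, user, millis)
  "#{user}/#{type}/#{millis}"
end

type_first_key("search", "alice", 1)  # => "search/alice/1"
user_first_key("search", "alice", 1)  # => "alice/search/1"
```

Note that option 2 only helps if user IDs are well distributed; sequential user IDs would just move the hot spot.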

Our use case does not require searching across multiple types, so I'm
leaning towards #3 now, but I was wondering if there are any cons to this
method? Is it worse to have more tables rather than fewer?

Thanks for the help

-M




