Please find the data requirements for our use case below:
Raw data processing
--
1. Data is populated into HDFS; after ETL, around 3 billion puts per day go
into HBase.
2. Data older than X days is to be deleted from HBase (see the TTL sketch
below).
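A note on point 2: rather than issuing explicit deletes, HBase can age data
out natively via a column-family TTL, so expired cells are dropped at major
compaction time. A minimal sketch against the 0.96-era admin API; the table
name, family name, and X = 30 days are made-up placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class CreateTableWithTtl {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HBaseAdmin admin = new HBaseAdmin(conf);
            // Hypothetical table/family names for illustration only
            HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("raw_events"));
            HColumnDescriptor cf = new HColumnDescriptor("d");
            // Cells older than the TTL are removed at major compaction time
            cf.setTimeToLive(30 * 24 * 60 * 60); // X = 30 days, in seconds
            desc.addFamily(cf);
            admin.createTable(desc);
            admin.close();
        }
    }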
Aggregates processing
--
Looking at the history of HBASE-9435, Francis offered to submit a patch for
the refguide.
There was no JIRA linked to HBASE-9435 for the doc update, though.
Cheers
On Tue, Feb 4, 2014 at 5:38 PM, Demai Ni wrote:
Ted and all,
I finally figured out that HBASE-9435 introduced an incompatible change
in 0.96.0, e.g. removing the '@' attribute prefixes, adding '[]' arrays, etc.
Well, when I google 'hbase rest', the documentation still shows the
old syntax. :-)
Maybe http://wiki.apache.org/hadoop/Hbase/Stargate can add an example?
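For what it's worth, here's a minimal Java sketch that fetches a row through
the REST gateway; the host, table, and row key are made-up placeholders, and
the JSON shapes in the comments just illustrate the '@'-removal and
'[]'-addition described above:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RestRowGet {
        public static void main(String[] args) throws Exception {
            // Hypothetical REST server on localhost:8080, table "t1", row "row1"
            URL url = new URL("http://localhost:8080/t1/row1");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Accept", "application/json");
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
            in.close();
            // Pre-0.96 response shape (attributes prefixed with '@'):
            //   {"Row": {"@key": "...", "Cell": {"@column": "...", "$": "..."}}}
            // 0.96+ response shape (prefixes dropped, Row/Cell become arrays):
            //   {"Row": [{"key": "...", "Cell": [{"column": "...", "$": "..."}]}]}
        }
    }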
Do you feel up to creating an RPM spec for 0.96 and later?
If so, I don't think anybody would be opposed to committing it into
dev-support or something.
-- Lars
----- Original Message -----
From: Ishan Chhabra
To: user@hbase.apache.org
Cc:
Sent: Tuesday, February 4, 2014 2:26 PM
Subject: Buildi
Hi,
We do an RPM-based deploy to CentOS servers. HBase up to 0.94 used to have a
functional RPM spec and an RPM build profile, but it looks like they are gone
from 0.96 onwards.
Was this removed intentionally? Is there now another way to build RPMs?
--
*Ishan Chhabra *| Rocket Scientist | RocketFuel Inc.
Yes,
1. What is the expected average and peak load in writes/updates/deletes/reads?
   (a rough sizing sketch follows this list)
2. What is the average size of a key-value (KV)?
3. What is the percentage split between point reads, small scans, and
   medium/large scans?
4. Do you plan to run M/R jobs or Hive queries?
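As a rough illustration of why these numbers matter, a back-of-the-envelope
sketch; the 3 billion puts/day comes from the use case above, while the KV
size, peak factor, and per-node write rate are pure assumptions to be
replaced by your actual answers:

    public class WriteSizingSketch {
        public static void main(String[] args) {
            long putsPerDay = 3_000_000_000L;   // from the stated use case
            double avgKvBytes = 200.0;          // assumption: answer to question 2
            double peakFactor = 3.0;            // assumption: peak = 3x average
            double perNodePutsPerSec = 10_000;  // assumption: sustained per-RS write rate

            double avgPutsPerSec = putsPerDay / 86_400.0;
            double peakPutsPerSec = avgPutsPerSec * peakFactor;
            double ingestGbPerDay = putsPerDay * avgKvBytes / 1e9;

            System.out.printf("avg puts/sec: %.0f, peak: %.0f%n",
                    avgPutsPerSec, peakPutsPerSec);
            System.out.printf("raw ingest: %.0f GB/day (before replication)%n",
                    ingestGbPerDay);
            System.out.printf("region servers needed for peak writes: %.0f%n",
                    Math.ceil(peakPutsPerSec / perNodePutsPerSec));
        }
    }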
Best regards,
Vladimir Rodionov
Principal Platform Engineer
Carrier IQ, www.carrieriq.com
e-mail: vr
I guess you'd better describe your applications in a bit more detail.
Does the data increase over time at all?
Nick
On Tue, Feb 4, 2014 at 5:22 AM, suresh babu wrote:
Hi folks,
We are trying to set up an HBase cluster for the following requirement:
We have to maintain data of size around 800 TB.
For the above requirement, please suggest the best hardware configuration
details, like:
1) how many disks to consider per machine and the capacity of those disks, for
example,
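To make the disk question concrete, here's a back-of-the-envelope sketch of
the node-count math; the 800 TB is from the requirement above, while the
replication factor, disks per node, disk size, and fill ratio are all
assumptions to tune:

    public class ClusterSizingSketch {
        public static void main(String[] args) {
            double logicalTb = 800;      // from the requirement above
            int hdfsReplication = 3;     // assumption: default HDFS replication
            int disksPerNode = 12;       // assumption: typical 12-drive chassis
            double diskTb = 4;           // assumption: 4 TB SATA drives
            double fillRatio = 0.7;      // assumption: headroom for compactions/temp space

            double rawTb = logicalTb * hdfsReplication;
            double usablePerNodeTb = disksPerNode * diskTb * fillRatio;
            System.out.printf("raw storage needed: %.0f TB%n", rawTb);
            System.out.printf("data nodes needed: %.0f%n",
                    Math.ceil(rawTb / usablePerNodeTb));
        }
    }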
Did you create the table prior to launching your program?
If so, when you scan the hbase:meta table, do you see row(s) for it?
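In the shell that's just: scan 'hbase:meta'. Programmatically, a minimal
0.96-era sketch that prints the row keys of all hbase:meta entries, so you
can look for your table's regions:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ScanMeta {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable meta = new HTable(conf, TableName.META_TABLE_NAME);
            ResultScanner scanner = meta.getScanner(new Scan());
            for (Result r : scanner) {
                // Meta row keys look like: <table>,<start key>,<region id>...
                System.out.println(Bytes.toStringBinary(r.getRow()));
            }
            scanner.close();
            meta.close();
        }
    }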
Cheers
On Feb 4, 2014, at 12:53 AM, Murali wrote:
> Hi Ted,
>
> I tried your solution, but I got the same error message.
>
One more thing: I don't know how 0.92 handled compression, since I used
it for only a few weeks, but if you used Snappy or other codecs that don't
ship with the default release, you might want to give it a close look before
migrating, since it might not be as straightforward as the HFile upgrade.
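A quick way to sanity-check a codec on the target install is the
CompressionTest utility that ships with HBase; the file path here is just a
scratch location:

    hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/testfile.txt snappy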
JM
Hi Ted,
I tried your solution, but I got the same error message.
Thanks