I use the data to generate reports and run a couple of analyses on a daily
basis; it is insert-once, read-many. But my main goal is to keep my data
safe and recover it easily even if my Hadoop datanode or HDFS crashes. Up
till now I have been using an approach in which data is retrieved directly
from HDFS, and a few days back my Hadoop cluster crashed; when I repaired
it, I was unable to recover my old data, which resided on HDFS. So please
let me know: do I have to make an architectural change, or is there any way
to recover data that resides in a crashed HDFS?
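
As a side note on the recovery question: HDFS normally survives the loss of a single datanode through block replication, so losing old data after one crash often means the replication factor was set to 1. A minimal sketch of the relevant setting in hdfs-site.xml (the value 3 is the common default; adjust for your cluster):

```xml
<!-- hdfs-site.xml: number of copies HDFS keeps of each block.
     With replication >= 3, losing a single datanode does not lose data. -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
```

Existing files can be re-replicated with `hadoop fs -setrep -w 3 /path`, and `hadoop fsck /` reports missing or under-replicated blocks. Note that replication does not protect against namenode metadata loss, so the namenode's image and edit log should also be written to more than one location.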


On Wed, Jul 17, 2013 at 11:00 PM, Nitin Pawar <nitinpawar...@gmail.com> wrote:

> what's the purpose of data storage?
> what's the read and write throughput you expect?
> how will you access the data when reading?
> what are your SLAs on both read and write?
>
> there will be more questions others will ask so be ready for that :)
>
>
>
> On Wed, Jul 17, 2013 at 11:10 PM, Hamza Asad <hamza.asa...@gmail.com> wrote:
>
>> Please let me know which approach is better: either I save my data
>> directly to HDFS and run Hive (Shark) queries over it, or I store my data in
>> HBase and then query it. I want to ensure efficient data retrieval, and
>> that the data remains safe and can easily be recovered if Hadoop crashes.
>>
>> --
>> *Muhammad Hamza Asad*
>>
>
>
>
> --
> Nitin Pawar
>



-- 
*Muhammad Hamza Asad*