Hi Niels and Bertrand,
    Thank you for your great advice.
    In our scenario, we need to store a steady stream of binary data in a
circular storage; throughput and concurrency are the most important
requirements. The first way seems to work, but since HDFS is not friendly
to small files, this approach may not be smooth enough. HBase is good, but
not appropriate for us, in terms of both throughput and storage. MongoDB
works well for web applications, but it does not suit our scenario either.
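    To make the first way concrete, here is a rough, untested sketch of what
such a writer could look like, using the standard Hadoop FileSystem API (the
HdfsRingWriter class name, the part-file layout, and the sizes are only
illustrative, not an existing implementation):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    /** Emulates a circular buffer with a fixed ring of HDFS files:
     *  when the current file reaches maxFileSize, the writer rolls to
     *  the next slot, discarding that slot's old contents. */
    public class HdfsRingWriter {
        private final FileSystem fs;
        private final Path dir;
        private final int slots;         // number of files in the ring
        private final long maxFileSize;  // bytes per file before rolling
        private int current = -1;
        private FSDataOutputStream out;

        public HdfsRingWriter(Configuration conf, Path dir, int slots,
                              long maxFileSize) throws IOException {
            this.fs = FileSystem.get(conf);
            this.dir = dir;
            this.slots = slots;
            this.maxFileSize = maxFileSize;
            roll();
        }

        public synchronized void write(byte[] record) throws IOException {
            if (out.getPos() + record.length > maxFileSize) {
                roll();  // advance to the next slot, dropping its old data
            }
            out.write(record);
        }

        private void roll() throws IOException {
            if (out != null) out.close();
            current = (current + 1) % slots;
            Path next = new Path(dir, String.format("part-%05d", current));
            fs.delete(next, false);  // drop the oldest data in this slot
            out = fs.create(next);   // create() overwrites by default
        }

        public synchronized void close() throws IOException {
            out.close();
        }
    }

    The small-file problem shows up here because every slot is still a
separate HDFS file, so the slots must be large for this to stay smooth.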
    We need a distributed storage system with high throughput, HA, load
balancing, and security. Maybe it would act much like HBase, managing a lot
of small files (like HFiles) as one large region, so that many small files
are handled as a single large one. Perhaps we should develop it ourselves;
a sketch of the packing idea follows below.
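    For example, Hadoop's SequenceFile could already serve as such a
container, packing many small binary records into one large file per
"region". A minimal, untested sketch (the /ring path and the region file
naming are hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.SequenceFile;

    public class PackedRegionWriter {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            // One big container file stands in for many small files,
            // much as an HBase region bundles records into HFiles.
            Path region = new Path("/ring/region-00001.seq");
            SequenceFile.Writer writer = SequenceFile.createWriter(
                    fs, conf, region, LongWritable.class, BytesWritable.class);
            byte[] record = "one small binary record".getBytes("UTF-8");
            // Key = arrival time, value = raw bytes; appends stay sequential,
            // which keeps the write path friendly to throughput.
            writer.append(new LongWritable(System.currentTimeMillis()),
                          new BytesWritable(record));
            writer.close();
        }
    }

    Rolling to a new region file when the current one is full, and deleting
the oldest region, would then give the circular behaviour on top of large
files.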

Thank you.
Lin Wukang


2013/7/25 Niels Basjes <ni...@basjes.nl>

> A circular file on HDFS is not possible.
>
> Some of the ways around this limitation:
> - Create a series of files and delete the oldest file when you have too
> much data.
> - Put the data into an hbase table and do something similar.
> - Use a completely different technology like MongoDB, which has built-in
> support for a circular buffer (capped collections).
>
> Niels
>
> Hi all,
>    Is there any way to use an HDFS file as a circular buffer? I mean, if I
> set a quota on a directory in HDFS and write data to a file in that
> directory continuously, then once the quota is exceeded, I can redirect the
> writer and start writing from the beginning of the file automatically.
>
>
