Hi,

Quoting from the documentation at
http://hadoop.apache.org/core/docs/current/hdfs_design.html

Simple Coherency Model

HDFS applications need a write-once-read-many access model for files. A file
once created, written, and closed need not be changed. This assumption
simplifies data coherency issues and enables high throughput data access. A
MapReduce application or a web crawler application fits perfectly with this
model. There is a plan to support appending-writes to files in the future.


I was wondering whether this plan was ever implemented.

Appending writes are a must in my application, and a reliable, searchable,
and MapReduce-able distributed file system is a must as well. That is why I
looked at Hadoop, thinking it might be the right platform for the project.
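
To make the requirement concrete, here is a minimal sketch of the kind of
operation I need. The append() call on FileSystem is the API I'm hoping
for; I haven't been able to find it in the current release, hence the
question (the path is just an illustration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path log = new Path("/logs/events.log");  // hypothetical path

            // Hoped-for behavior: reopen an existing, closed file and
            // append new records to its end instead of rewriting it.
            FSDataOutputStream out = fs.append(log);
            out.write("new record\n".getBytes("UTF-8"));
            out.close();

            fs.close();
        }
    }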

-- 
Tzury Bar Yochay
