Hi friends,

I have two questions.

First: I use libhdfs's hflush to flush my data to a file. Within the same process I can read the flushed data back, but the file looks unchanged when I check from the Hadoop shell: its length is zero (checked with "hadoop fs -ls xxx" and by reading it from a program). However, after I restart HDFS, I can read the content I flushed. Why is that? Can I hflush data to a file without closing it, and at the same time read the flushed data from another process?
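For reference, here is a minimal sketch of the write-then-hflush pattern I mean. It assumes a reachable HDFS cluster ("default" filesystem) and a libhdfs that exposes hdfsHFlush(); on older releases the equivalent call may be hdfsFlush() or hdfsSync(), and the path "/tmp/hflush-demo" is just an example. I have not verified this against my CDH version:

```c
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include "hdfs.h"

int main(void) {
    /* Connect to the default HDFS filesystem (assumes a running cluster). */
    hdfsFS fs = hdfsConnect("default", 0);
    if (!fs) { fprintf(stderr, "connect failed\n"); return 1; }

    /* Hypothetical demo path. */
    hdfsFile f = hdfsOpenFile(fs, "/tmp/hflush-demo", O_WRONLY, 0, 0, 0);
    if (!f) { fprintf(stderr, "open failed\n"); return 1; }

    const char *msg = "hello\n";
    hdfsWrite(fs, f, msg, (tSize)strlen(msg));

    /* Push the written bytes to the datanodes without closing the file.
     * My observation: a reader in the same process sees the data, but
     * "hadoop fs -ls" still reports length 0 until the file is closed. */
    hdfsHFlush(fs, f);

    /* ... continue writing; only closing the file finalizes its length. */
    hdfsCloseFile(fs, f);
    hdfsDisconnect(fs);
    return 0;
}
```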
Second: once an HDFS file is closed, is its last written block left untouched? That is, if I reopen the file in append mode, will the namenode allocate a new block for the appended data? I find that if I close the file and reopen it in append mode again and again, the HDFS report shows used space much larger than the file's logical size.
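One way to check whether each close/append cycle really adds a block is to list the file's blocks with fsck (the path below is just an example; this needs a running cluster):

```shell
# Show the file's blocks, their sizes, and the datanodes holding them
hadoop fsck /tmp/hflush-demo -files -blocks -locations
```

If the block count grows with every append cycle, that would explain used space far above the logical file size: many partially filled blocks.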
By the way, I am using Cloudera ch2.

Thanks a lot,
kanghua