Hi team,

 

While doing some research on Hadoop, I ran into several high-level
questions. Any comments from you would be a great help:

 

1. Hadoop assumes that files are big, but take Google as an example: the
results returned to users seem to be small files. How should I understand
"big files"? And what might the file content be, for example?

 

2. Why are files write-once, read-many-times?

 

3. How do I install other software on Hadoop? Are there any special
requirements for the software? Does it need to support the MapReduce model
before it can be installed?

 

Your help would be very much appreciated.

 

Wang Qin (Annie Wang)

 

6F, Building 7, No. 418 Guilin Road, Xuhui District, Shanghai
Zip code: 200 233
Tel:    +86 21 5497 8666-8004
Fax:    +86 21 5497 7986
Mobile: +86 137 6108 8369

 
