Store Large files/images HBase

2009-10-19 Thread Luis Carlos Junges
Hi, I am currently doing some research on distributed databases that can be scaled easily in terms of storage capacity. The reason is to use one on the Brazilian federal project called "portal do aluno," which will have around 10 million kids accessing it monthly. The idea is to build a portal simila

RE: Question about MapReduce

2009-10-19 Thread Doug Meil
Hi there- I didn't see this option in the thread yet, which seems pretty straightforward. When setting up the job: Job job = new Job(conf, "my job"); ... conf.setStrings("param", "param1"); And then in the map method: String paramVal = context.getConfiguration().get(
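[Editor's note: for readers skimming the archive, the pattern above can be fleshed out roughly as follows. This is a minimal sketch against the Hadoop 0.20 (org.apache.hadoop.mapreduce) API with a hypothetical mapper class; it sets the parameter on the Configuration before constructing the Job, since the Job takes a copy of the Configuration it is handed, and it uses the matched conf.set()/get() pair rather than setStrings() (see the correction further down the thread).]

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;

    public class ParamPassingSketch {

        public static class MyMapper
                extends Mapper<LongWritable, Text, Text, Text> {
            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                // Read the parameter back out of the job configuration.
                String paramVal = context.getConfiguration().get("param");
                context.write(new Text(paramVal), value);
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Set the parameter BEFORE constructing the Job; the Job
            // copies the Configuration, so later changes to 'conf'
            // are not seen by the tasks.
            conf.set("param", "param1");
            Job job = new Job(conf, "my job");
            job.setJarByClass(ParamPassingSketch.class);
            job.setMapperClass(MyMapper.class);
            // ... input path, output path, output types, etc. as usual ...
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }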

Re: Question about MapReduce

2009-10-19 Thread Something Something
Interesting... I haven't tried this yet, but in this case what would you specify as the 'InputPath'? I was under the assumption that a Job needs some kind of InputPath, no? I don't see a NullInputPath. Is there something equivalent?

RE: Question about MapReduce

2009-10-19 Thread Doug Meil
The job still needs everything else (input path, output path, mapper class, etc.); this is only addressing the question of how parameters can be passed to mappers/reducers. Correction: there is a bug in my example... Since I'm reading the parameter with 'get()' I shoul
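[Editor's note: the likely substance of the truncated correction is the set/get mismatch: Configuration.setStrings() stores its values as a single comma-separated string, so the matched reader is getStrings(); for a single scalar value, the plain set()/get() pair is the simpler choice. A small sketch of both matched pairs, with a hypothetical class name:]

    import java.util.Arrays;
    import org.apache.hadoop.conf.Configuration;

    public class ConfPairsSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();

            // Matched pair for a single value.
            conf.set("param", "param1");
            String single = conf.get("param");          // -> "param1"

            // Matched pair for a list of values; setStrings() joins
            // them with commas, and getStrings() splits them again.
            conf.setStrings("params", "a", "b", "c");
            String[] many = conf.getStrings("params");  // -> [a, b, c]

            System.out.println(single + " / " + Arrays.toString(many));
        }
    }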

org.apache.hadoop.util.DiskChecker$DiskErrorException

2009-10-19 Thread Harshit Kumar
Hi, I get the following error when executing a map-reduce application on EC2. Can anyone please suggest some pointers on what is going wrong? I tried searching the mailing list, but couldn't find any help regarding this type of error. *[r...@ip-10-243-47-69 hadoop-0.19.0]# bin/hadoop jar lubm.jar