Re: Map/reduce with input files on S3

2008-03-26 Thread Prasan Ary
Owen, yes, I am using Hadoop 0.16.1. No, the JIRA doesn't relate to my case. The message "Hook previously registered" comes up only if I try to access files on S3 from my Java application running on EC2. The same application runs smoothly if the input file is copied to the image on EC2 and
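For context, a minimal sketch of the kind of client-side access described above: opening an s3:// path through Hadoop's FileSystem API from a standalone Java program. The bucket and object names are placeholders, and it assumes the S3 credentials are configured as discussed later in the thread.

  import java.io.BufferedReader;
  import java.io.InputStreamReader;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class S3ReadExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // Placeholder bucket/key; credentials are expected to be set via
      // fs.s3.awsAccessKeyId / fs.s3.awsSecretAccessKey in the configuration.
      Path input = new Path("s3://mybucket/input/part-00000");
      FileSystem fs = input.getFileSystem(conf); // resolves to the S3 FileSystem
      FSDataInputStream in = fs.open(input);
      BufferedReader reader = new BufferedReader(new InputStreamReader(in));
      String line;
      while ((line = reader.readLine()) != null) {
        System.out.println(line);
      }
      reader.close();
    }
  }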

Re: Map/reduce with input files on S3

2008-03-26 Thread Tom White
I wonder if it is related to https://issues.apache.org/jira/browse/HADOOP-3027. I think it is - the same problem is fixed for me when using the patch from HADOOP-3027. Tom

Re: Map/reduce with input files on S3

2008-03-26 Thread Prasan Ary
I changed the configuration a little so that the MR jar file now runs on my local Hadoop cluster, but takes input files from S3. I get the following output:

08/03/26 17:32:39 INFO mapred.FileInputFormat: Total input paths to process : 1
08/03/26 17:32:44 INFO mapred.JobClient: Running
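Roughly what that setup amounts to in job-submission code, sketched against the old JobConf API of the 0.16 era (class name, paths, and the commented-out mapper/reducer are placeholders; later releases moved the path setters onto FileInputFormat/FileOutputFormat):

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapred.JobClient;
  import org.apache.hadoop.mapred.JobConf;

  public class S3InputJob {
    public static void main(String[] args) throws Exception {
      JobConf conf = new JobConf(S3InputJob.class);
      conf.setJobName("s3-input-example");

      // Input lives on S3; output goes to the cluster's default filesystem.
      conf.setInputPath(new Path("s3://mybucket/input"));
      conf.setOutputPath(new Path("/user/hadoop/output"));

      conf.setOutputKeyClass(Text.class);
      conf.setOutputValueClass(LongWritable.class);
      // conf.setMapperClass(...); conf.setReducerClass(...); // job-specific

      JobClient.runJob(conf);
    }
  }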

Map/reduce with input files on S3

2008-03-25 Thread Prasan Ary
I am running Hadoop on EC2. I want to run a jar MR application on EC2 such that input and output files are on S3. I configured hadoop-site.xml so that the fs.default.name property points to my S3 bucket with all required credentials (e.g. s3://ID:SECRET_KEY@bucket). I created an input
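For reference, a hedged sketch of what that hadoop-site.xml setting might look like. The bucket name and keys are placeholders; supplying the credentials as separate fs.s3.* properties is an alternative to embedding them in the URI (which can be awkward when the secret key contains a '/').

  <?xml version="1.0"?>
  <configuration>
    <!-- Make the S3 bucket the default filesystem (placeholder bucket name). -->
    <property>
      <name>fs.default.name</name>
      <value>s3://mybucket</value>
    </property>
    <!-- Credentials supplied separately instead of inside the URI. -->
    <property>
      <name>fs.s3.awsAccessKeyId</name>
      <value>YOUR_ACCESS_KEY_ID</value>
    </property>
    <property>
      <name>fs.s3.awsSecretAccessKey</name>
      <value>YOUR_SECRET_ACCESS_KEY</value>
    </property>
  </configuration>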

Re: Map/reduce with input files on S3

2008-03-25 Thread Otis Gospodnetic
To: core-user@hadoop.apache.org
Sent: Tuesday, March 25, 2008 4:07:15 PM
Subject: Map/reduce with input files on S3

> I am running hadoop on EC2. I want to run a jar MR application on EC2 such that input and output files are on S3. I configured hadoop-site.xml so that fs.default.name property