Here is what is happening, taken directly from the EC2 session. The ID and 
secret key are the only things I've changed.
  I'm running Hadoop 0.15.3 from the public AMI. I launched a 2-machine cluster 
using the EC2 scripts in src/contrib/ec2/bin . . .
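   
  (For context, the launch step itself probably looked something like the sketch below. The script name, and the detail that the cluster size was set in hadoop-ec2-env.sh rather than on the command line, are assumptions about the contrib/ec2 scripts of that era, not details taken from this thread.)
   
  # hypothetical sketch of the launch; script name is an assumption, see above
  cd src/contrib/ec2/bin
  ./launch-hadoop-cluster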
  The file I try to copy is 9 KB (I noticed previous discussion about empty files 
and files that are > 10 MB).
   
  >>>>> First I make sure that we can copy the file from S3
  [EMAIL PROTECTED] hadoop-0.15.3]# bin/hadoop fs -copyToLocal s3://ID:[EMAIL PROTECTED]/InputFileFormat.xml /usr/InputFileFormat.xml
  >>>>> Now I see that the file has been copied to the EC2 master (where I'm logged in)
  [EMAIL PROTECTED] hadoop-0.15.3]# dir /usr/Input*
  /usr/InputFileFormat.xml
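   
  (As an extra sanity check, one could also list the source through the same s3:// URI that distcp will later resolve, using the standard fs -ls command with the same redacted credentials as above:
   
  bin/hadoop fs -ls s3://ID:[EMAIL PROTECTED]/
   
  If that listing shows the file but distcp still says it does not exist, the URI is probably being mangled on the distcp path rather than the object being missing.)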
   
  >>>>> Next I make sure I can access HDFS and that the input directory is there
  [EMAIL PROTECTED] hadoop-0.15.3]# bin/hadoop fs -ls /
  Found 2 items
  /input <dir> 2008-04-01 15:45
  /mnt <dir> 2008-04-01 15:42
  [EMAIL PROTECTED] hadoop-0.15.3]# bin/hadoop fs -ls /input/
  Found 0 items
   
  >>>>> I make sure Hadoop is running fine by running an example
  [EMAIL PROTECTED] hadoop-0.15.3]# bin/hadoop jar hadoop-0.15.3-examples.jar pi 10 1000
  Number of Maps = 10 Samples per Map = 1000
  Wrote input for Map #0
  Wrote input for Map #1
  Wrote input for Map #2
  Wrote input for Map #3
  Wrote input for Map #4
  Wrote input for Map #5
  Wrote input for Map #6
  Wrote input for Map #7
  Wrote input for Map #8
  Wrote input for Map #9
  Starting Job
  08/04/01 17:38:14 INFO mapred.FileInputFormat: Total input paths to process : 10
  08/04/01 17:38:14 INFO mapred.JobClient: Running job: job_200804011542_0001
  08/04/01 17:38:15 INFO mapred.JobClient: map 0% reduce 0%
  08/04/01 17:38:22 INFO mapred.JobClient: map 20% reduce 0%
  08/04/01 17:38:24 INFO mapred.JobClient: map 30% reduce 0%
  08/04/01 17:38:25 INFO mapred.JobClient: map 40% reduce 0%
  08/04/01 17:38:27 INFO mapred.JobClient: map 50% reduce 0%
  08/04/01 17:38:28 INFO mapred.JobClient: map 60% reduce 0%
  08/04/01 17:38:31 INFO mapred.JobClient: map 80% reduce 0%
  08/04/01 17:38:33 INFO mapred.JobClient: map 90% reduce 0%
  08/04/01 17:38:34 INFO mapred.JobClient: map 100% reduce 0%
  08/04/01 17:38:43 INFO mapred.JobClient: map 100% reduce 20%
  08/04/01 17:38:44 INFO mapred.JobClient: map 100% reduce 100%
  08/04/01 17:38:45 INFO mapred.JobClient: Job complete: job_200804011542_0001
  08/04/01 17:38:45 INFO mapred.JobClient: Counters: 9
  08/04/01 17:38:45 INFO mapred.JobClient: Job Counters 
  08/04/01 17:38:45 INFO mapred.JobClient: Launched map tasks=10
  08/04/01 17:38:45 INFO mapred.JobClient: Launched reduce tasks=1
  08/04/01 17:38:45 INFO mapred.JobClient: Data-local map tasks=10
  08/04/01 17:38:45 INFO mapred.JobClient: Map-Reduce Framework
  08/04/01 17:38:45 INFO mapred.JobClient: Map input records=10
  08/04/01 17:38:45 INFO mapred.JobClient: Map output records=20
  08/04/01 17:38:45 INFO mapred.JobClient: Map input bytes=240
  08/04/01 17:38:45 INFO mapred.JobClient: Map output bytes=320
  08/04/01 17:38:45 INFO mapred.JobClient: Reduce input groups=2
  08/04/01 17:38:45 INFO mapred.JobClient: Reduce input records=20
  Job Finished in 31.028 seconds
  Estimated value of PI is 3.1556
   
  >>>>> Finally, I try to copy the file over
  [EMAIL PROTECTED] hadoop-0.15.3]# bin/hadoop distcp s3://ID:[EMAIL PROTECTED]/InputFileFormat.xml /input/InputFileFormat.xml
  With failures, global counters are inaccurate; consider running with -i
  Copy failed: org.apache.hadoop.mapred.InvalidInputException: Input source s3://ID:[EMAIL PROTECTED]/InputFileFormat.xml does not exist.
  at org.apache.hadoop.util.CopyFiles.copy(CopyFiles.java:470)
  at org.apache.hadoop.util.CopyFiles.run(CopyFiles.java:550)
  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
  at org.apache.hadoop.util.CopyFiles.main(CopyFiles.java:563)
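   
  (One workaround sketch, offered as an assumption rather than a confirmed fix: AWS secret keys containing characters such as '/' or '+' are known to break URI parsing, which can surface as exactly this "does not exist" error even though the object exists, so keeping the credentials out of the URI avoids the problem. The property names below are the standard ones for Hadoop's s3 block FileSystem; BUCKET is a placeholder for the real bucket name.
   
  In conf/hadoop-site.xml:
   
  <property>
    <name>fs.s3.awsAccessKeyId</name>
    <value>YOUR_ACCESS_KEY_ID</value>
  </property>
  <property>
    <name>fs.s3.awsSecretAccessKey</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
   
  Then the copy can use a bare URI:
   
  bin/hadoop distcp s3://BUCKET/InputFileFormat.xml /input/InputFileFormat.xml )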
   
--------------------------------------------------------------------------------

[EMAIL PROTECTED] wrote:
  > That was a typo in my email. I do have s3:// in my command when it fails.


Not sure what's wrong. Your command looks right to me. Would you mind showing 
me the exact error message you see?

Nicholas



       