Are you using 0.18? I know that copying from HDFS to s3n isn't supported there
yet. I think there's a fix in 0.19.
/ Per
On Mon, Nov 24, 2008 at 2:11 AM, Alexander Aristov
[EMAIL PROTECTED] wrote:
Hi all
I am testing the s3n filesystem facilities and trying to copy from HDFS to S3
in its native format.
It is supported according to
http://wiki.apache.org/hadoop/AmazonS3
and
http://issues.apache.org/jira/browse/HADOOP-930
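A sketch of such a copy with distcp (the bucket name, paths, and credentials below are placeholders, not from the original thread):

```shell
# Copy a directory from HDFS to S3 in native (one-file-per-object) format
# via the s3n filesystem. "my-bucket" and all paths are placeholders.
# The AWS keys can also be set in hadoop-site.xml via
# fs.s3n.awsAccessKeyId and fs.s3n.awsSecretAccessKey instead of the URI.
hadoop distcp hdfs://namenode:9000/user/data s3n://ACCESS_KEY:SECRET_KEY@my-bucket/data
```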
Alexander
2008/11/26 Per Jacobsson [EMAIL PROTECTED]
Are you using 0.18? I know that copying from HDFS to s3n isn't supported there
yet. I think there's a fix in 0.19.
/ Per
Change the name of the reduce method to be all lower case -- public void reduce(...
Right now the compiler is complaining that you haven't overridden the
correct abstract method in the base class.
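A minimal sketch of what the corrected class should look like, assuming the 0.18-era org.apache.hadoop.mapred API; the class name and the Text/IntWritable key and value types here are hypothetical:

```java
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class SumReducer extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {

  // The method must be named exactly "reduce" (all lower case) with this
  // signature; otherwise the class doesn't implement the Reducer
  // interface and the compiler reports an unimplemented abstract method.
  public void reduce(Text key, Iterator<IntWritable> values,
                     OutputCollector<Text, IntWritable> output,
                     Reporter reporter) throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
    }
    output.collect(key, new IntWritable(sum));
  }
}
```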
/ Per
On Sat, Nov 8, 2008 at 10:44 PM, pols cut [EMAIL PROTECTED] wrote:
I am trying to get a
We're doing the same thing, but doing the scheduling just with shell scripts
running on a machine outside of the Hadoop cluster. It works but we're
getting into a bit of scripting hell as things get more complex.
We're using distcp to first copy the files the jobs need from S3 to HDFS and
it
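A sketch of that staging pattern as shell steps (bucket names, paths, and the job jar are placeholders):

```shell
# Stage input from S3 into HDFS, run the job, then push results back.
hadoop distcp s3n://my-bucket/job-input hdfs:///staging/input
hadoop jar my-job.jar MyJob hdfs:///staging/input hdfs:///staging/output
hadoop distcp hdfs:///staging/output s3n://my-bucket/job-output
```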
Hi all,
We've been running a pretty big job on 20 extra-large high-CPU EC2 servers
(Hadoop version 0.18, Java 1.6, the standard AMIs), and started getting the
dreaded "Could not find any valid local directory" error during the final
reduce phase.
I've confirmed that some of the boxes are running
Quick FYI: I've run the same job twice more without seeing the error.
/ Per
On Wed, Oct 1, 2008 at 11:07 AM, Per Jacobsson [EMAIL PROTECTED] wrote:
Hi everyone,
(apologies if this gets posted on the list twice for some reason, my first
attempt was denied as suspected spam)
I ran a job last
I've collected the syslogs from the failed reduce jobs. What's the best way
to get them to you? Let me know if you need anything else, I'll have to shut
down these instances some time later today.
Overall I've run this same job before with no problems. The only change is
the added gzip of the
Attached to the ticket. Hope this helps.
/ Per
On Wed, Oct 1, 2008 at 1:33 PM, Arun C Murthy [EMAIL PROTECTED] wrote:
On Oct 1, 2008, at 12:04 PM, Per Jacobsson wrote:
I've collected the syslogs from the failed reduce jobs. What's the best way
to get them to you? Let me know if you need
If that's true, then can I set the number of Reducers very high
(even equal to the number of maps) to make Job C go faster?
This page has some good info on finding the right number of reducers:
http://wiki.apache.org/hadoop/HowManyMapsAndReduces
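With the old JobConf API the reducer count is set per job. A minimal sketch -- the driver class name and the count of 20 are placeholders; see the wiki page for how to actually pick the number:

```java
import org.apache.hadoop.mapred.JobConf;

public class MyJob {
  public static void main(String[] args) {
    JobConf conf = new JobConf(MyJob.class);
    // Rough rule of thumb from the wiki: 0.95 or 1.75 x
    // (worker nodes x mapred.tasktracker.tasks.maximum).
    // The value 20 here is just a placeholder.
    conf.setNumReduceTasks(20);
    // ... set mapper/reducer classes and input/output paths,
    // then submit with JobClient.runJob(conf).
  }
}
```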
/ Per
On Fri, Sep 19, 2008 at 9:42 AM, Miles