...the error: I was starting the namenode and datanodes and changing
fs.default.name to s3://bucket/ after that. Now I understand why it doesn't
work.
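For the record, a minimal sketch of the hadoop-site.xml that works for me now,
with S3 as the default filesystem and no namenode/datanode started (the bucket
name and AWS keys below are placeholders):

<configuration>
  <!-- Make S3 the default filesystem; no HDFS daemons are needed. -->
  <property>
    <name>fs.default.name</name>
    <value>s3://bucket</value>
  </property>
  <!-- Placeholder credentials for the bucket. -->
  <property>
    <name>fs.s3.awsAccessKeyId</name>
    <value>YOUR_ACCESS_KEY_ID</value>
  </property>
  <property>
    <name>fs.s3.awsSecretAccessKey</name>
    <value>YOUR_SECRET_ACCESS_KEY</value>
  </property>
</configuration>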
Thank you *very* much for your help; now I can use EC2 and S3 :)
slitz
On Fri, Jul 11, 2008 at 10:46 PM, Tom White <[EMAIL PROTECTED]> wrote:
So, in my case I cannot use S3 at all for now because of these two problems.
Any advice?
slitz
On Fri, Jul 11, 2008 at 4:31 PM, Lincoln Ritter <[EMAIL PROTECTED]>
wrote:
> Thanks Tom!
>
> Your explanation makes things a lot clearer. I think that changing
> the '
I'm having the exact same problem. Any tips?
slitz
On Wed, Jul 2, 2008 at 12:34 AM, Lincoln Ritter <[EMAIL PROTECTED]>
wrote:
> Hello,
>
> I am trying to use S3 with Hadoop 0.17.0 on EC2. Using this style of
> configuration:
>
>
> fs.default.name
>
...nt example, but the error was always the same.
What could be the problem here? And how can I access the filesystem with
"bin/hadoop fs ..." if the default filesystem isn't S3?
thank you very much :)
slitz
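The shell accepts fully qualified URIs, so something along these lines should
work even when S3 is not the default filesystem (the bucket name and paths are
placeholders):

bin/hadoop fs -ls s3://bucket/
bin/hadoop fs -copyFromLocal localfile s3://bucket/dir/file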
On Tue, Jul 1, 2008 at 4:43 PM, Chris K Wensel <[EMAIL PROTECTED]> wrote:
...ript right? or am I completely confused?
slitz
...I'd like to know how to use this kind of thing in Hadoop, as it could help
me understand other classes and patterns. So it would be great if someone
could give me an example of how to use it.
slitz
On Wed, Jun 25, 2008 at 7:53 PM, montag <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
Could someone please show me a quick example of how to use this class, or
MultipleOutputFormat subclasses in general? I'm somewhat lost...
slitz
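A minimal sketch of the usual pattern, assuming the 0.17-era
org.apache.hadoop.mapred API (the subclass name and the per-key file layout
are illustrative):

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

// Sends each record to an output file named after its key,
// e.g. <output dir>/<key>/part-00000.
public class KeyBasedOutput extends MultipleTextOutputFormat<Text, Text> {
  @Override
  protected String generateFileNameForKeyValue(Text key, Text value,
                                               String name) {
    // "name" is the default leaf file name, e.g. "part-00000".
    return key.toString() + "/" + name;
  }
}

The job driver then selects it with conf.setOutputFormat(KeyBasedOutput.class).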
Thank you for the file:/// tip; I was not including it in the paths.
I'm running the example with this line:
bin/hadoop jar hadoop-*-examples.jar grep file:///home/slitz/warehouse/input
file:///home/slitz/warehouse/output 'dfs[a-z.]+'
but I'm still getting the same error.
I've read in the archive that it should be possible to use any distributed
filesystem, since the data is available to all nodes, so it should be possible
to use NFS, right? I've also read somewhere in the archive that this should be
possible...
slitz
On Fri, Apr 11, 2008 at 1:43 PM, ...
...nodes can access the NFS share, and the path to the share is
/home/slitz/warehouse on all three.
My hadoop-site.xml file was copied to all nodes and looks like this:

<property>
  <name>fs.default.name</name>
  <value>local</value>
  <description>The name of the default file system. Either the literal string
  "local" or a host:port for DFS.</description>
</property>
Hello,
It worked like a charm! Thank you :)
slitz
On Thu, Feb 28, 2008 at 5:51 PM, Johannes Zillmann <[EMAIL PROTECTED]> wrote:
> Hi Slitz,
>
> try
> conf.setMapOutputValueClass(Text.class);
> conf.setMapOutputKeyClass(Text.class);
> conf.setO
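For context, a minimal sketch of where those calls sit in a 0.17-era job
driver (the driver class name is a placeholder, and Text keys/values are
assumed):

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;

public class MyJob {  // placeholder driver class
  public static JobConf configure() {
    JobConf conf = new JobConf(MyJob.class);
    // Intermediate (map output) types: these must match what the Mapper
    // actually emits, or the job fails with a type mismatch at runtime.
    conf.setMapOutputKeyClass(Text.class);
    conf.setMapOutputValueClass(Text.class);
    // Final (reduce output) types.
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(Text.class);
    return conf;
  }
}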
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:192)
at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1804)
I'm just trying to modify the wordcount example slightly to fit my needs, but
I keep getting this kind of error. Can somebody please point me in the right
direction?
Thank you
slitz