...in my case I cannot use S3 at all for now because of these two problems.
Any advice?
slitz
On Fri, Jul 11, 2008 at 4:31 PM, Lincoln Ritter [EMAIL PROTECTED]
wrote:
Thanks Tom!
Your explanation makes things a lot clearer. I think that changing
the 'fs.default.name' to something like ...
What could be the problem here? And how can I access the filesystem with
bin/hadoop fs ... if the default filesystem isn't S3?
thank you very much :)
slitz
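(For reference, a minimal sketch of the configuration in question, assuming S3 is made the default filesystem; the bucket name below is made up, and the S3 credential properties would also need to be set. With this in place, other filesystems remain reachable from bin/hadoop fs by passing fully-qualified URIs such as file:///some/path or hdfs://host:port/path instead of bare paths.)

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- hypothetical bucket name; substitute your own -->
    <value>s3://my-example-bucket</value>
  </property>
</configuration>
```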
On Tue, Jul 1, 2008 at 4:43 PM, Chris K Wensel [EMAIL PROTECTED] wrote:
by editing the hadoop-site.xml, you set the default. But I'm still
confused?
slitz
Could someone please show me a quick example of how to use this class, or
MultipleOutputFormat subclasses in general? I'm somewhat lost...
slitz
...to know how to use this kind of thing in Hadoop,
as it could help me understand other classes and patterns.
So it would be great if someone could give me an example of how to use it.
slitz
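(Not authoritative, but a minimal sketch of the usual pattern, against the old org.apache.hadoop.mapred API; the class name and file layout below are made up. The idea is to subclass MultipleTextOutputFormat and override generateFileNameForKeyValue to choose an output file per record, then set it as the job's output format with conf.setOutputFormat(KeyBasedOutput.class).)

```java
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

// Sketch: route each record to a file named after its key.
public class KeyBasedOutput extends MultipleTextOutputFormat<Text, Text> {
  @Override
  protected String generateFileNameForKeyValue(Text key, Text value, String name) {
    // "name" is the default leaf file name (e.g. part-00000);
    // the returned path is relative to the job's output directory.
    return key.toString() + "/" + name;
  }
}
```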
On Wed, Jun 25, 2008 at 7:53 PM, montag [EMAIL PROTECTED] wrote:
Hi,
You should check out ...
...can access the NFS share, and the path to the
share is /home/slitz/warehouse on all three.
My hadoop-site.xml file was copied to all nodes and looks like this:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>local</value>
    <description>The name of the default file system. Either ...</description>
  </property>
</configuration>
I've read in the archive that it should be possible to use any distributed
filesystem, since the data is available to all nodes, so it should be
possible to use NFS, right?
I've also read somewhere in the archive that this should be possible...
slitz
On Fri, Apr 11, 2008 at 1:43 PM, Peeyush
Thank you for the file:/// tip, I was not including it in the paths.
I'm running the example with this line: bin/hadoop jar
hadoop-*-examples.jar grep file:///home/slitz/warehouse/input
file:///home/slitz/warehouse/output 'dfs[a-z.]+'
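(As an aside, 'dfs[a-z.]+' is just the regular expression the grep example searches the input for; assuming a standard grep is available, its behavior can be checked locally on sample text:)

```shell
# Show what the Hadoop grep example's pattern matches: words
# starting with "dfs" followed by lowercase letters or dots.
printf 'dfsadmin\nhello\ndfs.replication\n' | grep -oE 'dfs[a-z.]+'
# prints:
# dfsadmin
# dfs.replication
```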
But I'm getting the same error as before: ...
Could someone please point me in the right direction?
Thank you
slitz