Re: Namenode Exceptions with S3

2008-07-11 Thread slitz
, in my case I cannot use S3 at all for now because of these two problems. Any advice? slitz On Fri, Jul 11, 2008 at 4:31 PM, Lincoln Ritter [EMAIL PROTECTED] wrote: Thanks Tom! Your explanation makes things a lot clearer. I think that changing the 'fs.default.name' to something like
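The thread above is about pointing 'fs.default.name' at the S3 block filesystem. A minimal hadoop-site.xml sketch for that setup is below; the bucket name and key values are placeholders, and fs.s3.awsAccessKeyId / fs.s3.awsSecretAccessKey are the standard S3-block-filesystem credential properties of that Hadoop era (keeping credentials out of the URI itself, as the thread suggests):

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- BUCKET is a placeholder for your S3 bucket name -->
    <value>s3://BUCKET</value>
  </property>
  <property>
    <name>fs.s3.awsAccessKeyId</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3.awsSecretAccessKey</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
</configuration>
```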

Re: Using S3 Block FileSystem as HDFS replacement

2008-07-01 Thread slitz
. What could the problem be here? And how can I access the FileSystem with bin/hadoop fs ... if the default filesystem isn't S3? Thank you very much :) slitz On Tue, Jul 1, 2008 at 4:43 PM, Chris K Wensel [EMAIL PROTECTED] wrote: by editing the hadoop-site.xml, you set the default. but I
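On the question of reaching a non-default filesystem from the shell: bin/hadoop fs accepts a fully qualified URI, so you can address S3 explicitly regardless of what fs.default.name is set to. A sketch (BUCKET and the path are placeholders):

```shell
# List a directory on the S3 block filesystem by giving the full URI,
# even when fs.default.name points at HDFS or the local filesystem.
bin/hadoop fs -ls s3://BUCKET/some/path

# The same works for other fs subcommands, e.g. copying in a file:
bin/hadoop fs -put localfile.txt s3://BUCKET/some/path/
```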

Using S3 Block FileSystem as HDFS replacement

2008-06-30 Thread slitz
confused? slitz

MultipleOutputFormat example

2008-06-25 Thread slitz
someone please show me a quick example of how to use this class, or MultipleOutputFormat subclasses in general? I'm somewhat lost... slitz

Re: MultipleOutputFormat example

2008-06-25 Thread slitz
to know how to use these kinds of things in Hadoop, as this could help me understand other classes and patterns. So it would be great if someone could give me an example of how to use it. slitz On Wed, Jun 25, 2008 at 7:53 PM, montag [EMAIL PROTECTED] wrote: Hi, You should check out
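For the MultipleOutputFormat question in this thread, a minimal sketch using the old org.apache.hadoop.mapred API of that era: subclass MultipleTextOutputFormat and override generateFileNameForKeyValue to choose an output file per record. The class and method names are from that API; the per-key directory naming scheme is just an illustration, not something from the thread.

```java
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

public class KeyBasedOutputFormat
    extends MultipleTextOutputFormat<Text, Text> {

  // Called once per output record; the returned string becomes the
  // file name, relative to the job's output directory.
  @Override
  protected String generateFileNameForKeyValue(Text key, Text value,
                                               String name) {
    // e.g. records for key "foo" land in <outdir>/foo/part-00000
    return key.toString() + "/" + name;
  }
}
```

Wired up in the job driver with something like conf.setOutputFormat(KeyBasedOutputFormat.class) on the JobConf.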

Using NFS without HDFS

2008-04-11 Thread slitz
can access the NFS share, and the path to the share is /home/slitz/warehouse on all three. My hadoop-site.xml file was copied over all nodes and looks like this: <configuration><property><name>fs.default.name</name><value>local</value><description>The name of the default file system. Either

Re: Using NFS without HDFS

2008-04-11 Thread slitz
I've read in the archive that it should be possible to use any distributed filesystem, since the data is available to all nodes, so it should be possible to use NFS, right? I've also read somewhere in the archive that this should be possible... slitz On Fri, Apr 11, 2008 at 1:43 PM, Peeyush

Re: Using NFS without HDFS

2008-04-11 Thread slitz
Thank you for the file:/// tip, I was not including it in the paths. I'm running the example with this line - bin/hadoop jar hadoop-*-examples.jar grep file:///home/slitz/warehouse/input file:///home/slitz/warehouse/output 'dfs[a-z.]+' But I'm getting the same error as before, I'm getting

Different output classes from map and reducer

2008-02-27 Thread slitz
please point me in the right direction? Thank you slitz
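The thread title asks about a mapper whose output types differ from the reducer's. With the old org.apache.hadoop.mapred API of this era, the fix is to declare the intermediate (map) types separately from the final (job) types on the JobConf; otherwise the framework assumes the mapper emits the job's output classes and the shuffle fails with a type mismatch. A sketch, with the concrete Writable types chosen only for illustration:

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;

public class OutputTypesDriver {
  public static void configure(JobConf conf) {
    // Types emitted by the mapper (consumed by the reducer):
    conf.setMapOutputKeyClass(Text.class);
    conf.setMapOutputValueClass(IntWritable.class);

    // Types emitted by the reducer (written to the output files):
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(LongWritable.class);
  }
}
```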