Re: Namenode Exceptions with S3

2008-07-16 Thread slitz
e error, I was starting the namenode and datanodes and only changing fs.default.name to s3://bucket/ after that; now I understand why it doesn't work. Thank you *very* much for your help, now I can use EC2 and S3 :) slitz On Fri, Jul 11, 2008 at 10:46 PM, Tom White <[EMAIL PROTECTED]> wrote:
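In other words, fs.default.name has to point at S3 before anything is started, and with the S3 block filesystem as the default there is no namenode or datanode to run at all. A minimal hadoop-site.xml sketch (bucket name and credentials are placeholders):

  <property>
    <name>fs.default.name</name>
    <value>s3://YOUR-BUCKET</value>
  </property>
  <property>
    <name>fs.s3.awsAccessKeyId</name>
    <value>YOUR_ACCESS_KEY_ID</value>
  </property>
  <property>
    <name>fs.s3.awsSecretAccessKey</name>
    <value>YOUR_SECRET_ACCESS_KEY</value>
  </property>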

Re: Namenode Exceptions with S3

2008-07-11 Thread slitz
HADOOP-3733 So, in my case I cannot use S3 at all for now because of these two problems. Any advice? slitz On Fri, Jul 11, 2008 at 4:31 PM, Lincoln Ritter <[EMAIL PROTECTED]> wrote: > Thanks Tom! > > Your explanation makes things a lot clearer. I think that changing > the '

Re: Namenode Exceptions with S3

2008-07-09 Thread slitz
I'm having the exact same problem, any tips? slitz On Wed, Jul 2, 2008 at 12:34 AM, Lincoln Ritter <[EMAIL PROTECTED]> wrote: > Hello, > > I am trying to use S3 with Hadoop 0.17.0 on EC2. Using this style of > configuration: > > > fs.default.name >

Re: Using S3 Block FileSystem as HDFS replacement

2008-07-01 Thread slitz
nt example but the error was always the same. What could be the problem here? And how may I access the FileSystem with "bin/hadoop fs ..." if the default filesystem isn't S3? Thank you very much :) slitz On Tue, Jul 1, 2008 at 4:43 PM, Chris K Wensel <[EMAIL PROTECTED]>
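On the shell question, a hedged sketch (bucket name is a placeholder): even when the default filesystem is not S3, bin/hadoop fs accepts a fully qualified URI, so the S3 store can be addressed explicitly:

  bin/hadoop fs -ls s3://YOUR-BUCKET/
  bin/hadoop fs -put local.txt s3://YOUR-BUCKET/data/local.txt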

Using S3 Block FileSystem as HDFS replacement

2008-06-30 Thread slitz
ript, right? Or am I completely confused? slitz

Re: MultipleOutputFormat example

2008-06-25 Thread slitz
ke to know how to use this kind of thing in Hadoop, as this could help me understand other classes and patterns. So it would be great if someone could give me an example of how to use it. slitz On Wed, Jun 25, 2008 at 7:53 PM, montag <[EMAIL PROTECTED]> wrote: > > Hi, > > Y

MultipleOutputFormat example

2008-06-25 Thread slitz
Could someone please show me a quick example of how to use this class or MultipleOutputFormat subclasses in general? I'm somewhat lost... slitz
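A minimal sketch of one common pattern, using the old mapred API from this era (class and file names here are illustrative, not from the thread): subclass MultipleTextOutputFormat, override generateFileNameForKeyValue() to pick an output file per record, and register the subclass as the job's output format:

  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapred.JobConf;
  import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

  // Routes each record to a file named after its key, so all values
  // for one key end up together in their own output file.
  public class KeyBasedOutputFormat
      extends MultipleTextOutputFormat<Text, Text> {

    @Override
    protected String generateFileNameForKeyValue(Text key, Text value, String name) {
      // "name" is the default leaf name (e.g. "part-00000");
      // returning the key groups the output by key instead.
      return key.toString();
    }
  }

Then, in the job setup: conf.setOutputFormat(KeyBasedOutputFormat.class);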

Re: Using NFS without HDFS

2008-04-11 Thread slitz
Thank you for the file:/// tip, I was not including it in the paths. I'm running the example with this line -> bin/hadoop jar hadoop-*-examples.jar grep file:///home/slitz/warehouse/input file:///home/slitz/warehouse/output 'dfs[a-z.]+' But I'm getting the same error

Re: Using NFS without HDFS

2008-04-11 Thread slitz
I've read in the archive that it should be possible to use any distributed filesystem, since the data is available to all nodes, so it should be possible to use NFS, right? I've also read somewhere in the archive that this should be possible... slitz On Fri, Apr 11, 2008 at 1:43 P

Using NFS without HDFS

2008-04-11 Thread slitz
nodes can access the NFS share, and the path to the share is /home/slitz/warehouse on all three. My hadoop-site.xml file was copied to all nodes and looks like this:

  <property>
    <name>fs.default.name</name>
    <value>local</value>
    <description>The name of the default file system. Either the literal string "local" or a host:port for DFS.</description>
  </property>

Re: Different output classes from map and reducer

2008-03-05 Thread slitz
Hello, it worked like a charm! Thank you :) slitz On Thu, Feb 28, 2008 at 5:51 PM, Johannes Zillmann <[EMAIL PROTECTED]> wrote: > Hi Slitz, > > try > conf.setMapOutputValueClass(Text.class); > conf.setMapOutputKeyClass(Text.class); > conf.setO
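A hedged sketch of the complete setup the truncated reply points at (the job class name and the final output types are illustrative): when the map output types differ from the reduce output types, both pairs must be set on the JobConf:

  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapred.JobConf;

  public class JobSetupSketch {
    public static JobConf configure() {
      JobConf conf = new JobConf(JobSetupSketch.class);

      // Intermediate (map output) types, as in Johannes' reply.
      conf.setMapOutputKeyClass(Text.class);
      conf.setMapOutputValueClass(Text.class);

      // Final (reduce output) types; these may differ from the map side,
      // which is exactly the "different output classes" case.
      conf.setOutputKeyClass(Text.class);
      conf.setOutputValueClass(IntWritable.class);
      return conf;
    }
  }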

Different output classes from map and reducer

2008-02-27 Thread slitz
.hadoop.mapred.MapTask.run(MapTask.java:192) at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1804) I'm just trying to slightly modify the wordcount example to fit my needs, but I keep getting this kind of error. Can somebody please point me in the right direction? Thank you slitz