Can you run any map/reduce jobs, such as the word count example?

Raj

Sent from my iPad. Please excuse the typos.
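(For anyone following the thread: a smoke test along the lines Raj suggests might look like the sketch below. The examples-jar name and the `$HADOOP_HOME` layout are assumptions for a Hadoop 0.20.x-era install, not something stated in this thread — adjust to your own setup, and it presumes the Hadoop daemons are already running.)

```shell
# Hypothetical smoke test: run the stock wordcount example end to end.
# Assumes HADOOP_HOME points at your Hadoop install and start-all.sh has been run.
echo "hello hadoop hello world" > /tmp/smoke.txt

# Stage the input in HDFS (paths here are illustrative).
hadoop fs -mkdir wc-in
hadoop fs -put /tmp/smoke.txt wc-in/

# Run the bundled wordcount example job.
hadoop jar $HADOOP_HOME/hadoop-examples-*.jar wordcount wc-in wc-out

# Inspect the output.
hadoop fs -cat 'wc-out/part-*'
```

If this fails with the same `Failed to set permissions of path ... to 0700` error, the problem is in the Hadoop/cygwin environment itself rather than anything specific to the Mahout example.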
On Oct 17, 2011, at 5:18 PM, robpd <robpodol...@yahoo.co.uk> wrote:

> Hi
>
> I am new to Mahout and Hadoop. I'm currently trying to get the
> SimpleKMeansClustering example from the Mahout in Action book to work. I am
> running the whole thing under cygwin from a sh script (in which I
> explicitly add the necessary jars to the classpath).
>
> Unfortunately I get...
>
> Exception in thread "main" java.io.IOException: Failed to set permissions of
> path: file:/tmp/hadoop-Rob/mapred/staging/Rob1823346078/.staging to 0700
>     at org.apache.hadoop.fs.RawLocalFileSystem.checkReturnValue(RawLocalFileSystem.java:525)
>     at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:499)
>     at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:318)
>     at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:183)
>     at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:116)
>     at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:797)
>     at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:791)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Unknown Source)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
>     at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:791)
>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:465)
>     at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:494)
>     at org.apache.mahout.clustering.kmeans.KMeansDriver.runIteration(KMeansDriver.java:362)
>     at org.apache.mahout.clustering.kmeans.KMeansDriver.buildClustersMR(KMeansDriver.java:310)
>     at org.apache.mahout.clustering.kmeans.KMeansDriver.buildClusters(KMeansDriver.java:237)
>     at org.apache.mahout.clustering.kmeans.KMeansDriver.run(KMeansDriver.java:152)
>     at kmeans.SimpleKMeansClustering.main(Unknown Source)
>
> This is presumably because, although I am running under cygwin, Windows will
> not allow a change of privilege like this? I have done the following to
> address the problem, but without any success...
>
> a) Ensured I started Hadoop prior to running the program (start-all.sh)
>
> b) Edited the Hadoop conf file hdfs-site.xml to switch off the file
> permissions in HDFS...
>
> <property>
>   <name>dfs.permissions</name>
>   <value>false</value>
> </property>
>
> c) Issued a hadoop fs -chmod +rwx -R /tmp to ensure that everyone was
> allowed to write to anything under /tmp
>
> I'd be very grateful for some help here if you have any ideas. Sorry if I am
> being green about things. It does seem like there's lots to learn.
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/SimpleKMeansCLustering-Failed-to-set-permissions-of-path-to-0700-tp3429867p3429867.html
> Sent from the Hadoop lucene-users mailing list archive at Nabble.com.