Hi Daniel,

We had a similar problem earlier. In our case, it was caused by the slaves being unable to resolve the IP address of the master.
Can you try an nslookup for my.machine.com on the slaves to see if it works? If not, you'll have to make sure your DNS server can resolve the IP correctly.

-vishal.

-----Original Message-----
From: DANIEL CLARK [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 28, 2007 12:47 AM
To: [EMAIL PROTECTED]
Subject: hadoop-site.xml Help

I entered the following in hadoop-site.xml and am getting a 'connection refused' stack trace at the Linux command line. What could cause this?

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
  <name>fs.default.name</name>
  <value>MY.MACHINE.com:9000</value>
  <description>
    The name of the default file system. Either the literal string
    "local" or a host:port for NDFS.
  </description>
</property>

<property>
  <name>mapred.job.tracker</name>
  <value>my.machine.com:9001</value>
  <description>
    The host and port that the MapReduce job tracker runs at. If
    "local", then jobs are run in-process as a single map and reduce
    task.
  </description>
</property>

<property>
  <name>mapred.map.tasks</name>
  <value>1</value>
  <description>
    define mapred.map tasks to be number of slave hosts
  </description>
</property>

<property>
  <name>mapred.reduce.tasks</name>
  <value>1</value>
  <description>
    define mapred.reduce tasks to be number of slave hosts
  </description>
</property>

<property>
  <name>dfs.name.dir</name>
  <value>/opt/nutch/filesystem/name</value>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>/opt/nutch/filesystem/data</value>
</property>

<property>
  <name>mapred.system.dir</name>
  <value>/opt/nutch/filesystem/mapreduce/system</value>
</property>

<property>
  <name>mapred.local.dir</name>
  <value>/opt/nutch/filesystem/mapreduce/local</value>
</property>

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

</configuration>

Exception in thread "main" java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
        at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
        at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
        at java.net.Socket.connect(Socket.java:519)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:149)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:531)
        at org.apache.hadoop.ipc.Client.call(Client.java:458)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:163)
        at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:247)
        at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:105)
        at org.apache.hadoop.dfs.DistributedFileSystem$RawDistributedFileSystem.initialize(DistributedFileSystem.java:67)
        at org.apache.hadoop.fs.FilterFileSystem.initialize(FilterFileSystem.java:57)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:160)
        at org.apache.hadoop.fs.FileSystem.getNamed(FileSystem.java:119)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:91)
        at org.apache.nutch.crawl.Crawl.main(Crawl.java:83)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Daniel Clark, President
DAC Systems, Inc.
5209 Nanticoke Court
Centreville, VA 20120
Cell - (703) 403-0340
Email - [EMAIL PROTECTED]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

_______________________________________________
Nutch-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/nutch-general
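For what it's worth, the nslookup check suggested at the top of the thread can be scripted to run on every slave. The sketch below is only an illustration, not anything from Hadoop itself: check_master is a hypothetical helper, and my.machine.com / 9000 are just the hostname and namenode port from Daniel's config. It uses Python's resolver in place of nslookup so it can tell "name does not resolve" apart from "name resolves but nothing is listening" (the latter is what usually produces a "Connection refused" like the one above).

```python
import socket

def check_master(host, port, timeout=3.0):
    """Distinguish a DNS failure from a closed port on the master.

    Returns "dns"     if the hostname does not resolve at all,
            "refused" if it resolves but nothing answers on the port,
            "ok"      if a TCP connection succeeds.
    """
    try:
        addr = socket.gethostbyname(host)  # roughly what nslookup checks
    except socket.gaierror:
        return "dns"
    try:
        # Try the actual TCP connect the Hadoop IPC client would make.
        with socket.create_connection((addr, port), timeout=timeout):
            return "ok"
    except OSError:
        return "refused"

# On each slave, something like (hostname/port from the config above):
#   check_master("my.machine.com", 9000)  # fs.default.name (namenode)
#   check_master("my.machine.com", 9001)  # mapred.job.tracker
```

A "dns" result points at /etc/hosts or the DNS server, as Vishal suggests; a "refused" result means the name resolves fine and the daemon on that port is down or bound to a different interface.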
