On Thu, Jul 2, 2009 at 3:23 PM, Philip Zeyliger <[email protected]> wrote:
> That page is currently out of date. I believe the correct lists are
> {common,hdfs,mapred}-{user,dev,[email protected]
>
> Yes, that's 9 lists.
>
> There's also general@, and lists for the subprojects (Avro, Hive, Pig,
> HBase, Zookeeper, ...)
Then in this case there must be two updates:

1. Could someone please update the mailing list page, so newcomers stop cross-posting stuff around.
2. Wiki: I can confirm 0.20 works fine on OpenSolaris zones in a true cluster. :)

...and one question from a newbie (me):

By default Hadoop stores its files under a /tmp/... directory that is described as "a base for other temporary directories". However, among that "other temporary" stuff I see my actual fsimage and VERSION file. Are they temporary? Usually /tmp is just 1 GB on a separate partition and is volatile (a system or node reboot simply wipes it out). Does that mean Hadoop moves the actual data blocks elsewhere, or does it really store everything in /tmp?

For some reason, if I set:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/export/storage/hadoop-data</value>
</property>

...where I have lots of space per node, Hadoop somehow reports a capacity of 0 KB. :-) When I put anything, it crashes with "File 10mb-random.dat could only be replicated to 0 nodes, instead of 1":

09/07/02 16:33:17 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
09/07/02 16:33:17 WARN hdfs.DFSClient: Could not get block locations. Source file "/10mb-random.dat" - Aborting...

After I configure it back to "/tmp/anywhere-I-want", everything is OK. Why is it like this? And how do I make sure my data is not in a temporary place?

--
Kind regards, BM

Things that are stupid at the beginning rarely end up wisely.
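P.S. In case it helps anyone hitting the same thing: as far as I understand, in 0.20 the NameNode and DataNode directories default to ${hadoop.tmp.dir}/dfs/name and ${hadoop.tmp.dir}/dfs/data, so a way to keep the real data out of /tmp is to set dfs.name.dir and dfs.data.dir explicitly in hdfs-site.xml. A minimal sketch (the /export/storage paths below are just placeholders for my setup):

<configuration>
  <property>
    <!-- where the NameNode keeps its metadata (fsimage, edits, VERSION) -->
    <name>dfs.name.dir</name>
    <value>/export/storage/hdfs/name</value>
  </property>
  <property>
    <!-- where each DataNode keeps the actual block files -->
    <name>dfs.data.dir</name>
    <value>/export/storage/hdfs/data</value>
  </property>
</configuration>

The directories have to exist and be writable by the user running the daemons, and a fresh dfs.name.dir needs a "hadoop namenode -format" before the NameNode will start. If no DataNode manages to come up, the cluster reports 0 KB capacity and puts fail with "could only be replicated to 0 nodes", which may be exactly what I was seeing.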
