(/users/jeastman/...) but that throws
'input path does not exist' errors.
Jeff
-----Original Message-----
From: Jeff Eastman [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 21, 2008 11:15 AM
To: hadoop-user@lucene.apache.org
Subject: RE: Platform reliability with Hadoop
Is it really t
Sent: Sunday, January 20, 2008 11:44 AM
To: hadoop-user@lucene.apache.org
Subject: Re: Platform reliability with Hadoop
You might want to change the hadoop.tmp.dir entry alone; since the others are
derived from it, everything should be fine.
I am wondering if hadoop.tmp.dir might be used elsewhere, though.
thanks,
lohi
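
For reference, this is roughly how the other entries derive from hadoop.tmp.dir
in hadoop-default.xml (a sketch of the 0.15-era defaults; check your own
hadoop-default.xml for the authoritative values):

  <property>
    <name>dfs.name.dir</name>
    <value>${hadoop.tmp.dir}/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>${hadoop.tmp.dir}/dfs/data</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>${hadoop.tmp.dir}/mapred/local</value>
  </property>

Overriding hadoop.tmp.dir in hadoop-site.xml therefore moves all of these at once.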
Sent: Sunday, January 20, 2008 11:05:28 AM
Subject: RE: Platform reliability with Hadoop
I am almost operational again, but something in my configuration is still
not quite right. Here's what I did:
- I created a directory /u1/cloud-data on every machine's local disk
- I created a new u
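
A minimal hadoop-site.xml override matching that layout might look like the
following (a sketch; the path is the one mentioned above, and hadoop.tmp.dir is
assumed to be the only entry changed):

  <property>
    <name>hadoop.tmp.dir</name>
    <!-- default is /tmp/hadoop-${user.name}, which is lost when /tmp is cleaned -->
    <value>/u1/cloud-data</value>
  </property>

Note that after relocating these directories the namenode typically has to be
reformatted (bin/hadoop namenode -format) before the DFS comes up, and a freshly
formatted DFS is empty, which would also produce 'input path does not exist'
errors for paths that existed in the old DFS.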
-----Original Message-----
From: Jason Venner [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 16, 2008 10:04 AM
To: hadoop-user@lucene.apache.org
Subject: Re: Platform reliability with Hadoop
The /tmp default has caught us once or twice too. Now we put the files
elsewhere.
[EMAIL PROTECTED] wrote:
>> The DFS is stored in /tmp on each box.
Thanks, I will try a safer place for the DFS.
Jeff
-----Original Message-----
From: Jason Venner [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 16, 2008 10:04 AM
To: hadoop-user@lucene.apache.org
Subject: Re: Platform reliability with Hadoop
The /tmp default has caught us once or twice too. Now we put the files
elsewhere.
[EMAIL PROTECTED] wrote:
> The DFS is stored in /tmp on each box.
> The developers who own the machines occasionally reboot and reprofile them.
Won't you lose your blocks after a reboot, since /tmp gets cleaned up? Could this
be the reason you see data corruption?
A good idea is to configure the DFS to be any place other than /tmp.
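
If you prefer to pin the DFS directories explicitly rather than relying on the
hadoop.tmp.dir derivation, a hadoop-site.xml sketch (the /u1/cloud-data paths
are illustrative):

  <property>
    <name>dfs.name.dir</name>
    <value>/u1/cloud-data/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/u1/cloud-data/dfs/data</value>
  </property>

dfs.data.dir also accepts a comma-separated list of directories for spreading
blocks across multiple local disks, and dfs.name.dir accepts one for keeping
redundant copies of the namespace image.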