Yesterday, I bounced my DFS cluster. We realized that "ulimit -u" was, in
extreme cases, preventing the name node from creating threads. This had only
started occurring within the last day or so. When I brought the name node back
up, it had essentially been rolled back by one week, and I lost
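For anyone hitting the same thread-creation failures: on Linux, the per-user limit that `ulimit -u` reports is also visible from inside a running JVM via /proc/self/limits ("Max processes"), and Java threads are native threads, so they count against it. A minimal, Linux-only sketch for checking the limit the name node process actually sees (the parsing assumes the standard /proc/self/limits column layout):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ProcLimit {
    // Reads the soft "Max processes" limit from /proc/self/limits.
    // This is the same value `ulimit -u` reports for the user, and
    // it caps native thread creation for the whole JVM process.
    static String maxProcesses() throws IOException {
        for (String line : Files.readAllLines(Paths.get("/proc/self/limits"))) {
            if (line.startsWith("Max processes")) {
                // Columns: "Max processes", soft limit, hard limit, units.
                return line.trim().split("\\s+")[2];
            }
        }
        return "unknown";
    }

    public static void main(String[] args) throws IOException {
        System.out.println(maxProcesses());
    }
}
```

Running this with the same user and environment as the name node shows whether the limit is low enough to explain failed thread creation (typically surfacing as `OutOfMemoryError: unable to create new native thread`).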
…the same file?
Regards
Bertrand
On Tue, Sep 25, 2012 at 6:28 PM, Peter Sheridan
<psheri...@millennialmedia.com> wrote:
Hi all.
We're using Hadoop 1.0.3. We need to pick up a set of large (4+GB) files when
they've finished being written to HDFS by a different process. There doesn't
appear to be an API specifically for this. We discovered through
experimentation that the FileSystem.append() method can be used
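The append-probe idea from the message above can be sketched without a cluster. In real code the probe would call FileSystem.append(path) and immediately close the returned stream; while another client still holds the lease on the file, Hadoop raises AlreadyBeingCreatedException (an IOException subclass), and once the writer has closed the file the append succeeds. The sketch below abstracts the Hadoop call behind a Callable so the decision logic runs standalone; whether append works at all on 1.0.x also depends on dfs.support.append being enabled:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class AppendProbe {
    // Returns true when the probe call succeeds (no active writer),
    // false when it fails the way an in-progress HDFS write does.
    // The Callable stands in for: fs.append(path).close();
    static boolean finishedWriting(Callable<Void> appendCall) throws Exception {
        try {
            appendCall.call();
            return true;   // append succeeded: the file has been closed
        } catch (IOException e) {
            // Hadoop signals an active writer with
            // AlreadyBeingCreatedException, an IOException.
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulated probes: a file still being written, then a closed one.
        System.out.println(finishedWriting(() -> { throw new IOException("lease held"); }));
        System.out.println(finishedWriting(() -> null));
    }
}
```

Note that a successful probe briefly reopens the file for write, so the probing process needs write permission, and callers should close the stream immediately to release the lease.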