Even if it's not interceptable, we should file a ticket with HDFS to see
about improving either the message itself or its catchability.
On Thu, Jan 22, 2015 at 11:09 AM, Josh Elser wrote:
I know I've run into it before (hence why I brought it up) -- I also
don't remember 100% if there's another "common" reason for having
non-zero datanodes participating with none excluded.
I'm also not sure how this manifests itself in code, but, assuming it's
something identifiable, we could t
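One way this could be identifiable in code: HDFS raises the replication failure as a plain IOException, so the only handle on it is the message text. A minimal sketch (class and method names hypothetical; the exact wording of the HDFS message varies by version) of matching on the "N datanode(s) running and no node(s) are excluded" shape:

```java
import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HdfsSpaceHint {
    // Message shape as seen in HDFS stack traces; exact wording varies by version.
    private static final Pattern REPLICATION_FAILURE = Pattern.compile(
        "could only be replicated to 0 nodes.*?(\\d+) datanode\\(s\\) running"
        + " and no node\\(s\\) are excluded");

    static boolean looksLikeOutOfSpace(IOException e) {
        String msg = e.getMessage();
        if (msg == null) return false;
        Matcher m = REPLICATION_FAILURE.matcher(msg);
        // Non-zero datanodes running with none excluded is the case discussed
        // here: the datanodes are up but had no usable space for the block.
        return m.find() && Integer.parseInt(m.group(1)) > 0;
    }

    public static void main(String[] args) {
        IOException sample = new IOException(
            "File /accumulo/wal/foo could only be replicated to 0 nodes instead of"
            + " minReplication (=1). There are 3 datanode(s) running and no node(s)"
            + " are excluded in this operation.");
        if (looksLikeOutOfSpace(sample)) {
            System.out.println("check that HDFS has space left");
        }
    }
}
```

Message sniffing is brittle across HDFS versions, which is part of the argument for the upstream ticket mentioned earlier in the thread.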
Has this error come up before? Is there room for us to intercept that stack
trace and provide a "check that HDFS has space left" message? This might be
especially relevant after we've removed the hadoop info box on the monitor.
On Thu, Jan 22, 2015 at 8:30 AM, Josh Elser wrote:
How much free space do you still have in HDFS? If HDFS doesn't have enough
free space to make the file, I believe you'll see the error that you have
outlined. The way we create the file will also end up requiring at least
one GB with the default configuration.
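Remaining space can be checked with `hdfs dfsadmin -report` (or `hdfs dfs -df -h /`), and the reported figure has to cover the file's footprint across all replicas, not just the raw 1 GB. A minimal sketch of that arithmetic (class and method names hypothetical; the 1 GB figure is the default discussed above):

```java
public class WalSpaceCheck {
    static final long WAL_SIZE_BYTES = 1L << 30; // ~1 GB under the default config

    // Replication matters: a file consumes (size * replication factor)
    // of capacity spread across the datanodes.
    static boolean enoughSpace(long hdfsRemainingBytes, int replication) {
        return hdfsRemainingBytes >= WAL_SIZE_BYTES * replication;
    }

    public static void main(String[] args) {
        // e.g. 2 GB free with 3x replication cannot hold a 1 GB file
        System.out.println(enoughSpace(2L << 30, 3)); // false
        System.out.println(enoughSpace(4L << 30, 1)); // true
    }
}
```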
Also make sure to take into account any