I went ahead and created a JIRA, HADOOP-5552, with a unit test that
demonstrates this bug and a first version of a patch. I suspect the
patch needs some more work; if somebody wants to extend it to make
the unit test pass, that would be awesome.
thanks,
dhruba
http://issues.apache.
On Mar 16, 2009, at 4:29 AM, Steve Loughran wrote:
I spoke with someone from the local university on their High Energy
Physics problems last week - their single event files are about 2GB,
so that's the only sensible block size to use when scheduling work.
He'll be at ApacheCon next week, to ...
Since the block size is per file, you'd need to check at file create too.
-- Owen
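As a rough sketch of the create-time check Owen describes (class and method
names are hypothetical, not the actual HDFS code), the per-file request could
be validated before any blocks are allocated:

    import java.io.IOException;

    // Hypothetical sketch, not the actual HDFS code: reject an unsupported
    // per-file block size up front, at file-create time.
    public class BlockSizeValidator {
        // Practical ceiling while block lengths still pass through int fields.
        static final long MAX_SUPPORTED_BLOCK_SIZE = Integer.MAX_VALUE; // (2^31)-1

        static void checkBlockSize(long requestedBlockSize) throws IOException {
            if (requestedBlockSize <= 0) {
                throw new IOException("Block size must be positive, got "
                        + requestedBlockSize);
            }
            if (requestedBlockSize > MAX_SUPPORTED_BLOCK_SIZE) {
                throw new IOException("Block size " + requestedBlockSize
                        + " exceeds the supported maximum "
                        + MAX_SUPPORTED_BLOCK_SIZE);
            }
        }
    }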
Steve Loughran wrote:
Owen O'Malley wrote:
I seem to remember someone saying that blocks over 2^31 don't work. I
don't know if there is a jira already.
Looking at the stack trace, int is being used everywhere, which implies
an upper limit of (2^31)-1 for blocks. Easy to fix, though it may
change APIs, and then ...
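To make that int limit concrete, here is a tiny stand-alone Java sketch
(illustration only, not the HDFS code) showing how a 32 GB block size, as in
the original report, is mangled when narrowed to 32 bits:

    // Illustration only: why an int-typed length caps blocks at (2^31)-1.
    public class BlockSizeOverflowDemo {
        public static void main(String[] args) {
            long requestedBlockSize = 32L * 1024 * 1024 * 1024;  // 32 GB
            int narrowed = (int) requestedBlockSize;              // silent narrowing to 32 bits

            System.out.println("requested = " + requestedBlockSize); // 34359738368
            System.out.println("as int    = " + narrowed);           // 0 -- high bits dropped
            System.out.println("max int   = " + Integer.MAX_VALUE);  // 2147483647 = (2^31)-1
        }
    }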
I seem to remember someone saying that blocks over 2^31 don't work. I
don't know if there is a jira already.
-- Owen
On Mar 14, 2009, at 20:28, Raghu Angadi wrote:
I haven't looked much into this, but most likely this is a bug. I am
pretty sure large block sizes are not handled correctly.
A fix might be pretty straightforward. I suggest you file a jira and
preferably give some justification for large block sizes. I don't think
there is any reason to limit ...
hi there,
I tried "-put" then "-cat" for a 1.6 GB file and it worked fine, but
when trying it on a 16.4 GB file ("bigfile.dat"), I get the following
errors (see below). I got this failure both times I tried it, each
with a fresh install of single-node 0.19.1. Also, I set the block size
to 32 GB with ...
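Assuming a running single-node cluster and the standard FileSystem API, a
programmatic version of the steps above might look roughly like this (a
sketch; the path, sizes, and replication factor are illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Sketch of a programmatic reproduction of the report above: write a file
    // with a requested block size well past (2^31)-1 bytes, then read it back.
    public class LargeBlockSizeRepro {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();   // picks up the cluster config
            FileSystem fs = FileSystem.get(conf);

            long blockSize = 32L * 1024 * 1024 * 1024;  // 32 GB, as in the report
            Path file = new Path("/tmp/bigfile.dat");   // illustrative path

            byte[] chunk = new byte[1 << 20];           // write 1 MB of zeros at a time
            long toWrite = 3L * 1024 * 1024 * 1024;     // a few GB is enough to cross 2^31

            FSDataOutputStream out = fs.create(file, true, 4096, (short) 1, blockSize);
            try {
                for (long written = 0; written < toWrite; written += chunk.length) {
                    out.write(chunk);
                }
            } finally {
                out.close();
            }

            // Reading the file back is where the original report saw the failure.
            long read = 0;
            java.io.InputStream in = fs.open(file);
            try {
                int n;
                while ((n = in.read(chunk)) > 0) {
                    read += n;
                }
            } finally {
                in.close();
            }
            System.out.println("bytes read back: " + read);
        }
    }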