Sriram, Sailfish depends on append, and I just noticed that HDFS has append
disabled. How does one use it with Hadoop? (A rough sketch follows the quoted
text below.)
On Wed, May 9, 2012 at 9:00 AM, Otis Gospodnetic wrote:
> Hi Sriram,
>
> >> The I-file concept could possibly be implemented here in a fairly
> >> self-contained way. One could ev
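On the append question above, here is a minimal sketch of what appending to an HDFS file looks like, assuming a Hadoop release/branch where append is available and enabled; the path and class name are made up for illustration and are not from this thread:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // On 0.20.x-era HDFS, append had to be enabled explicitly (and was
            // disabled by default because it was considered unsafe there).
            conf.setBoolean("dfs.support.append", true);

            FileSystem fs = FileSystem.get(conf);
            Path p = new Path("/tmp/ifile-demo");   // hypothetical path

            // Create the file once, then reopen it for append.
            if (!fs.exists(p)) {
                fs.create(p).close();
            }
            FSDataOutputStream out = fs.append(p);
            out.writeBytes("one more record\n");
            out.close();
            fs.close();
        }
    }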
On Thu, Nov 3, 2011 at 4:27 AM, Uma Maheswara Rao G 72686 <
mahesw...@huawei.com> wrote:
> Yes, I remember this issue filed by Harsh recently.
> GlobStatus sorts the results before returning them. Maybe we can fix
> listStatus the same way.
>
Not a good idea to sort needlessly. That's why we h
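If a caller really does want a sorted listing, it can sort on the client side instead of pushing the cost into every FileSystem implementation. A hedged sketch (helper name is illustrative, not from this discussion):

    import java.util.Arrays;
    import java.util.Comparator;

    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SortedListing {
        // Returns the directory listing sorted by path, leaving the
        // FileSystem free to return entries in whatever order it likes.
        static FileStatus[] listSorted(FileSystem fs, Path dir) throws Exception {
            FileStatus[] entries = fs.listStatus(dir);
            Arrays.sort(entries, new Comparator<FileStatus>() {
                public int compare(FileStatus a, FileStatus b) {
                    return a.getPath().toString().compareTo(b.getPath().toString());
                }
            });
            return entries;
        }
    }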
...starts getting full. ext4 tends to have problems
with multiple streams (it seeks too much), and ext3 has a fragmentation
problem.
(MapR's disk layout is even better than XFS ... couldn't resist)
On Mon, Oct 10, 2011 at 3:48 AM, Steve Loughran wrote:
> On 09/10/11 07:01, M. C. Sri
Are there any performance benchmarks available for Ceph? (with Hadoop,
without, both?)
On Thu, Aug 25, 2011 at 11:44 AM, Alex Nelson wrote:
> Hi George,
>
> UC Santa Cruz contributed a ;login: article describing replacing HDFS with
> Ceph. (I was one of the authors.) One of the key architectur
By default, Linux file systems use a 4K block size. A 4K block size means
all I/O happens 4K at a time. Any *updates* to data smaller than 4K will
result in a read-modify-write cycle on disk, i.e., if a file was extended from
1K to 2K, the fs will read in the 4K block, memcpy the region from 1K-2K into
the in-memory block, and write the full 4K back to disk.
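To make the read-modify-write cycle concrete, here is a minimal user-space sketch of the same pattern. It is a plain Java illustration of what the fs does at the block layer, not Hadoop code; the file name is made up:

    import java.io.RandomAccessFile;

    public class ReadModifyWrite {
        static final int BLOCK = 4096;   // typical Linux fs block size

        public static void main(String[] args) throws Exception {
            RandomAccessFile f = new RandomAccessFile("demo.dat", "rw");  // made-up file
            f.write(new byte[1024]);             // start with a 1K file

            // Extending the file from 1K to 2K, done the way the fs does it:
            byte[] block = new byte[BLOCK];
            f.seek(0);
            f.read(block);                       // 1. read the containing 4K block
            System.arraycopy(new byte[1024], 0, block, 1024, 1024);  // 2. memcpy the new 1K-2K region
            f.seek(0);
            f.write(block);                      // 3. write the full 4K block back
            f.close();
            // Net cost: roughly 8K of block I/O to persist 1K of new data.
        }
    }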
Do you know where?
On Mon, Jan 25, 2010 at 3:51 AM, Ravi wrote:
> End of Feb..I think
>
> On 1/25/10, yuhen...@tce.edu wrote:
> > Hi,
> > When is the Hadoop Summit in India in 2010?