On Thu, 26 Apr 2007, Nick Piggin wrote:
>> OK, I would like to see them. And also discussions of things like why
>> we shouldn't increase PAGE_SIZE instead.

On Thu, Apr 26, 2007 at 12:34:50AM -0700, Christoph Lameter wrote:
> Because 4k is a good page size that is bound to the binary format? Frankly 
> there is no point in having my text files in large page sizes. However, 
> when I read a DVD I may want to transfer 64k chunks, and when I use my 
> flash drive I may want to transfer 128k chunks. And yes, if a scientific 
> application needs to do a data dump, it should be able to use very large 
> page sizes (megabytes, gigabytes) so it can continue its work while 
> the huge dump runs at full I/O speed ...

It's possible to divorce PAGE_SIZE from the binary formats, though I
found it difficult to keep up with the update treadmill. Maybe it's
like hch says and I just needed to find more and better API cleanups.
The only reason I haven't tried to resurrect it is that it's too much
for me to do on my own. I essentially collapsed under the weight of it,
and my 2.5.x codebase ended up a worse disaster than Katrina, which I
don't want to repeat; I think collaborators, or a project lead other
than myself, are needed to avoid that happening again.
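
For concreteness, the "bound to the binary format" constraint is visible
in the ELF program headers themselves: PT_LOAD segments in existing
executables are laid out for a particular alignment (typically 0x1000 on
x86), so a kernel whose effective page size exceeded that couldn't map
them without exactly the sort of divorce described above. A rough
userspace sketch, not kernel code, assuming an ELF64 binary and using
/bin/true purely as an example path, that prints those alignments:

/* Print p_align of each PT_LOAD segment in an ELF64 binary.  Existing
 * executables typically carry 0x1000 here, which is what ties a larger
 * PAGE_SIZE to the binary format unless the two are divorced. */
#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	Elf64_Ehdr eh;
	Elf64_Phdr ph;
	int i, fd = open(argc > 1 ? argv[1] : "/bin/true", O_RDONLY);

	if (fd < 0 || read(fd, &eh, sizeof(eh)) != sizeof(eh))
		exit(1);
	for (i = 0; i < eh.e_phnum; i++) {
		if (pread(fd, &ph, sizeof(ph),
			  eh.e_phoff + (off_t)i * eh.e_phentsize) != sizeof(ph))
			exit(1);
		if (ph.p_type == PT_LOAD)
			printf("PT_LOAD segment %d: p_align = 0x%llx\n",
			       i, (unsigned long long)ph.p_align);
	}
	close(fd);
	return 0;
}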

It's unclear how much the situation has changed now that 32-bit
workload feasibility issues have been relegated to ignorable or
deliberate "f**k 32-bit" status. The effect is doubtless to make it
easier, though to what degree I'm not sure.

Anyway, if that's being kicked around as an alternative, it could be
said that I have some insight into the issues surrounding it.


-- wli
