On Thu, 26 Apr 2007, Nick Piggin wrote:

> > But what do you mean with it? A block is no longer a contiguous section of
> > memory. So you have redefined the term.
>
> I don't understand what you mean at all. A block has always been a
> contiguous area of disk.
You want to change the block layer to support a larger blocksize than
PAGE_SIZE, right? So you need to segment that larger block into pieces.

> > And you don't care about Mel's work on that level?
>
> I actually don't like it too much because it can't provide a robust
> solution. What do you do on systems with small memories, or those that
> eventually do get fragmented?

You could e.g. switch off defragmentation and the large block support?

> Actually, I don't know why people are so excited about being able to
> use higher order allocations (I would rather be more excited about
> never having to use them). But for those few places that really need
> it, I'd rather see them use a virtually mapped kernel with proper
> defragmentation rather than putting hacks all through the core code.

Ahh. I knew we were going this way.... Now we have virtually contiguous
vs. physically discontiguous.... Yuck hackidihack.

> > No, this has been tried before and does not work. Why should we lose the
> > capability to work with 4k pages just because there is some data that has
> > to be thrown around in quantity? I'd like to have flexibility here.
>
> Is that a big problem? Really? You use 16K pages on your IPF systems,
> don't you?

Yes, but the processor supports 4k as well. I'd rather have a choice. 16k
is a choice made for performance given the current kernel limitations, and
it wastes lots of memory.
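
To make the contrast concrete, here is a rough sketch (not from either
poster) of the two allocation strategies being argued about: a higher-order
allocation that asks the buddy allocator for one physically contiguous
chunk, versus order-0 pages stitched together with vmap() so that only the
kernel virtual addresses are contiguous. The 64 KiB block size, struct
large_block, and the large_block_* helper names are made up for
illustration; alloc_pages(), vmap() and vunmap() are the kernel interfaces
involved.

/*
 * Rough sketch only: contrasts one physically contiguous higher-order
 * allocation with a virtually contiguous mapping of order-0 pages.
 * LARGE_BLOCK_SIZE and the large_block_* names are hypothetical.
 */
#include <linux/mm.h>
#include <linux/gfp.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

#define LARGE_BLOCK_SIZE	(64 * 1024)	/* hypothetical 64 KiB block */
#define LARGE_BLOCK_PAGES	(LARGE_BLOCK_SIZE / PAGE_SIZE)

/*
 * Higher-order allocation: one physically contiguous chunk from the buddy
 * allocator.  This is what can fail once memory gets fragmented.
 * (Freeing with __free_pages() is omitted for brevity.)
 */
static void *large_block_alloc_contig(void)
{
	struct page *page = alloc_pages(GFP_KERNEL, get_order(LARGE_BLOCK_SIZE));

	return page ? page_address(page) : NULL;
}

/*
 * Virtually contiguous alternative: allocate independent order-0 pages and
 * stitch them together with vmap(), so only the kernel virtual address
 * range is contiguous.
 */
struct large_block {
	void *addr;				/* virtually contiguous mapping */
	struct page *pages[LARGE_BLOCK_PAGES];	/* physically scattered pages */
};

static struct large_block *large_block_vmap(void)
{
	struct large_block *blk;
	int i;

	blk = kzalloc(sizeof(*blk), GFP_KERNEL);
	if (!blk)
		return NULL;

	for (i = 0; i < LARGE_BLOCK_PAGES; i++) {
		blk->pages[i] = alloc_page(GFP_KERNEL);
		if (!blk->pages[i])
			goto out_free;
	}

	blk->addr = vmap(blk->pages, LARGE_BLOCK_PAGES, VM_MAP, PAGE_KERNEL);
	if (!blk->addr)
		goto out_free;

	return blk;

out_free:
	while (--i >= 0)
		__free_page(blk->pages[i]);
	kfree(blk);
	return NULL;
}

static void large_block_vunmap(struct large_block *blk)
{
	int i;

	vunmap(blk->addr);			/* tears down the mapping only */
	for (i = 0; i < LARGE_BLOCK_PAGES; i++)
		__free_page(blk->pages[i]);	/* pages must be freed by hand */
	kfree(blk);
}

The trade-off the thread is circling around: the vmap() route pays for the
extra virtual mapping (TLB pressure, no physically contiguous buffer for
the hardware), while the higher-order route is sensitive to the
fragmentation that Nick is objecting to.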