On 03/26/2013 09:22 PM, Minchan Kim wrote:
> The swap subsystem does lazy swap slot freeing, expecting the page
> will be swapped out again, so we can avoid an unnecessary write.
>
> But the problem with in-memory swap is that it consumes memory space
> until the vm_swap_full() condition (i.e. more than half of the swap
> device is used) is met. It could be bad if we use multiple swap
> devices (a small in-memory swap and a big storage swap) or in-memory
> swap alone.
>
> This patch changes the vm_swap_full() logic slightly so it can free
> a swap slot early if the backing device is really fast.

Great idea!
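
Just to check my understanding of the proposed logic: the "swap full"
test becomes per-device, so a fast device frees slots eagerly while
rotating media keep today's half-full heuristic. Something like this
(my own sketch of the changelog above, not your actual patch):

/*
 * Sketch of the proposed per-device check (paraphrase of the
 * changelog, not the actual patch).
 */
static inline bool vm_swap_full(struct swap_info_struct *si)
{
	/*
	 * Fast (e.g. in-memory) backing device: keeping the slot
	 * cached pins memory while rewriting it is cheap, so treat
	 * the device as always "full" and free the slot eagerly.
	 */
	if (si->flags & SWP_SOLIDSTATE)
		return true;

	/*
	 * Existing heuristic: swap is "full" once more than half of
	 * all swap slots are in use.
	 */
	return get_nr_swap_pages() * 2 < total_swap_pages;
}

If that's the shape of it, every vm_swap_full() caller now needs the
swap_info_struct in hand, which is easy in swapfile.c but maybe less so
in the reclaim paths.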
> For it, I used SWP_SOLIDSTATE, but it might be controversial.

The comment for SWP_SOLIDSTATE is that "blkdev seeks are cheap". Just
because seeks are cheap doesn't mean the read itself is also cheap. For
example, QUEUE_FLAG_NONROT is set for mmc devices, but some of them can
be pretty slow.

> So let's CC Shaohua and Hugh.
> If it's a problem for SSDs, I'd like to create a new type, SWP_INMEMORY
> or something, for the z* family.

AFAICT, setting SWP_SOLIDSTATE depends on the characteristics of the
underlying block device (i.e. blk_queue_nonrot(); see the sketch at the
end of this mail). zram is a block device, but zcache and zswap are not.
Any idea by what criteria SWP_INMEMORY would be set?

Also, frontswap backends (zcache and zswap) are a caching layer on top
of the real swap device, which might actually be rotating media. So you
have the issue of two different characteristics, in-memory caching on
top of rotating media, present in a single swap device.

Thanks,
Seth
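
P.S. For reference, here is roughly where SWP_SOLIDSTATE comes from at
swapon time, as I read mm/swapfile.c. This is a paraphrased sketch
wrapped in a made-up helper (swap_set_media_flags() doesn't exist), not
the exact code:

/*
 * Paraphrase of the swapon-time flag setup in mm/swapfile.c; the
 * helper name is invented for illustration.
 */
static void swap_set_media_flags(struct swap_info_struct *si)
{
	/*
	 * The flag simply mirrors the queue's non-rotational bit,
	 * which is why a slow-but-seekless mmc device gets it too.
	 */
	if (si->bdev && blk_queue_nonrot(bdev_get_queue(si->bdev)))
		si->flags |= SWP_SOLIDSTATE;
}

Since zcache and zswap hook in via frontswap rather than sitting behind
a block device, there is no request queue here to derive an
SWP_INMEMORY-style flag from; it would have to be set by some other
mechanism entirely.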