Excerpts from Olaf van der Spek's message of 2011-01-07 10:08:24 -0500:
> On Fri, Jan 7, 2011 at 4:05 PM, Chris Mason <chris.ma...@oracle.com> wrote:
> >> > The problem is the write() // 0+ times. The kernel has no idea what
> >> > new result you want the file to contain because the application isn't
> >> > telling us.
> >>
> >> Isn't it safe for the kernel to wait until the first write or close
> >> before writing anything to disk?
> >
> > I'm afraid not. Picture an application that opens a thousand files and
> > writes 1MB to each of them, and then didn't close any. If we waited
> > until close, you'd have 1GB of memory pinned or staged somehow.
>
> That's not what I asked. ;)
> I asked to wait until the first write (or close). That way, you don't
> get unintentional empty files.
> One step further, you don't have to keep the data in memory, you're
> free to write them to disk. You just wouldn't update the meta-data
> (yet).
Sorry ;)

Picture an application that truncates 1024 files without closing any of
them. Basically any operation that includes the kernel waiting for
applications because they promise to do something soon is a denial of
service attack, or a really easy way to run out of memory on the box.

-chris
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
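[Editor's note: the "unintentional empty files" Olaf mentions come from the
open(O_TRUNC)-then-write window — if the system crashes between the truncate
and the write, the old contents are gone and a zero-length file remains. The
conventional userspace remedy, which the thread implies the kernel cannot do
on the application's behalf, is to write a temporary file, fsync() it, and
rename() it over the original. A minimal sketch (file names hypothetical):]

```c
/* Sketch: crash-safe file replacement via temp file + fsync + rename.
 * Avoids the O_TRUNC window in which a crash leaves a zero-length file. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int replace_file(const char *path, const char *data, size_t len)
{
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.tmp", path);

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    if (write(fd, data, len) != (ssize_t)len || /* new contents */
        fsync(fd) != 0) {                       /* data reaches disk first */
        close(fd);
        unlink(tmp);
        return -1;
    }
    close(fd);

    /* rename() atomically replaces the old file: readers see either the
     * old contents or the new contents, never an empty file. */
    return rename(tmp, path);
}
```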