Forgot to attach the benchmark:

#include <stdio.h>
#include <windows.h>

BYTE stuff_to_write[4096];
int N = 20; // simulate # of page writes per transaction

// Variant 1: open with FILE_FLAG_WRITE_THROUGH, so every WriteFile()
// goes to the device and no explicit flush is needed.
void test1()
{
    HANDLE h = CreateFile("C:\\temp1.txt", GENERIC_WRITE, 0, NULL,
        CREATE_ALWAYS, FILE_FLAG_WRITE_THROUGH, NULL);
    DWORD written;
    DWORD d = GetTickCount();

    for(int j=0; j!=100; j++) {
        for(int i=0; i!=N; i++) {
            WriteFile(h, stuff_to_write, 4096, &written, NULL);
        }
        WriteFile(h, stuff_to_write, 4, &written, NULL);
    }

    d = GetTickCount() - d;
    printf("WriteThrough: %lums (%f t/s)\n", d, (double)1000. * 100. / d);

    CloseHandle(h);
}

// Variant 2: buffered writes, with FlushFileBuffers() after the page
// writes and again after the small header write.
void test2()
{
    HANDLE h = CreateFile("C:\\temp2.txt", GENERIC_WRITE, 0, NULL,
        CREATE_ALWAYS, 0, NULL);
    DWORD written;
    DWORD d = GetTickCount();

    for(int j=0; j!=100; j++) {
        for(int i=0; i!=N; i++) {
            // simulate write of data pages to the journal
            WriteFile(h, stuff_to_write, 4096, &written, NULL);
        }
        FlushFileBuffers(h);

        // simulate update of the journal header
        WriteFile(h, stuff_to_write, 4, &written, NULL);
        FlushFileBuffers(h);
    }

    d = GetTickCount() - d;
    printf("FlushBuffer: %lums (%f t/s)\n", d, (double)1000. * 100. / d);

    CloseHandle(h);
}

int main(int argc, char* argv[])
{
    for(int i=0; i!=4096; i++) stuff_to_write[i] = (BYTE)i;

    test1();
    test2();

    return 0;
}
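
A note for anyone who wants to reproduce the numbers: this is a plain Win32
console program, so (assuming the MSVC command-line tools and a non-UNICODE
build, since CreateFile is called with narrow string literals) something like

    cl bench.c kernel32.lib

should build it. "bench.c" is just a placeholder name for whatever the code
is saved as; an older compiler may want it renamed to .cpp because of the
in-loop counter declarations. Also keep in mind that GetTickCount() only has
roughly 10-15ms resolution, so the shorter timings are approximate.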


On 5/18/05, Ludvig Strigeus wrote:
> 
> Christian Smith wrote:
> > No, because *every single* write to that handle will involve a sync to
> > the underlying device! That would decimate performance.
> >
> > Using a single FlushFileBuffers batches multiple writes in a single
> > sync operation.
> >
> > That this hurts performance on Windows says more about Windows than
> > about the algorithm used by SQLite.
> 
> There is nothing we can do about the performance of Windows; the only thing
> we can do is to modify SQLite to suit Windows better.
> 
> In some cases FlushFileBuffers is certainly better, and in some cases not.
> 
> I made a simple benchmark (the code attached above) that illustrates the
> two variants: FILE_FLAG_WRITE_THROUGH and FlushFileBuffers(). The benchmark
> simulates how SQLite writes to the journal: first it writes the N data
> pages, then it writes a small data field (such as updating a header),
> requiring a flush in between and after. The size of a data page was set to
> 4096 bytes and the outer loop runs 100 iterations (transactions).
> 
> 
> N=5 (number of writes):
> WriteThrough: 500ms (200.000000 t/s)
> FlushBuffer:  4703ms (21.263024 t/s)
> 
> N=10:
> WriteThrough: 1203ms (83.125520 t/s)
> FlushBuffer:  5000ms (20.000000 t/s)
> 
> N=20:
> WriteThrough: 1750ms (57.142857 t/s)
> FlushBuffer:  4719ms (21.190930 t/s)
> 
> N=40:
> WriteThrough: 4125ms (24.242424 t/s)
> FlushBuffer:  5140ms (19.455253 t/s)
> 
> N=80:
> WriteThrough: 7156ms (13.974287 t/s)
> FlushBuffer:  5875ms (17.021277 t/s)
> 
> As you can see, the WriteThrough throughput scales roughly like 1/#writes,
> while the FlushBuffer throughput remains fairly constant. So depending on
> the value of N the performance differs greatly: for small transactions of
> up to about 40 data pages, FILE_FLAG_WRITE_THROUGH is faster.
> 
> I don't exactly understand those numbers, though. When N=5 there are 6
> distinct calls to WriteFile per loop and 100 loop iterations, i.e. 600
> writes in total. My hard drive is 7200 rpm, meaning that each revolution
> takes about 8.3 milliseconds. If each such write required a complete
> platter revolution, the total time would be 600 * 8.3 ≈ 5000ms, which is
> considerably more than the 500ms measured by the benchmark. Maybe it
> manages to sync several times per revolution, if the file is scattered on
> the disk.
> 
> 
> Another interesting observation is that for FILE_FLAG_WRITE_THROUGH, the hard 
> drive remains almost silent, while with FlushBuffer the drive head seeks 
> constantly. If I remove FILE_FLAG_WRITE_THROUGH, the "WriteThrough" benchmark 
> runs 10x faster, so FILE_FLAG_WRITE_THROUGH indeed has some effect.
> 
> 
> I will try using the WriteFileGather function to write multiple pages 
> simultaneously.
> 
> /Ludvig
> 
>
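
Regarding the WriteFileGather() idea mentioned above, here is a rough,
untested sketch of what a gather write of the N journal pages might look
like. It only illustrates the documented requirements of the API (the handle
needs FILE_FLAG_NO_BUFFERING and FILE_FLAG_OVERLAPPED, each buffer must be
one page-aligned system memory page, and the segment array must end with a
NULL entry); it is not a proposed SQLite change, and the file name, page
size and page count are placeholders:

#define _WIN32_WINNT 0x0500   // make sure WriteFileGather is declared
#include <stdio.h>
#include <string.h>
#include <windows.h>

#define PAGE_SIZE 4096        // assumes the x86 page size; GetSystemInfo() gives the real value
#define N_PAGES   20

int main(void)
{
    FILE_SEGMENT_ELEMENT seg[N_PAGES + 1];
    OVERLAPPED ov;
    DWORD written;
    int i;

    // WriteFileGather requires FILE_FLAG_NO_BUFFERING | FILE_FLAG_OVERLAPPED
    HANDLE h = CreateFile("C:\\temp3.txt", GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
        FILE_FLAG_NO_BUFFERING | FILE_FLAG_OVERLAPPED | FILE_FLAG_WRITE_THROUGH, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    // One page-aligned buffer per journal page (VirtualAlloc returns page-aligned memory)
    for (i = 0; i < N_PAGES; i++) {
        void *buf = VirtualAlloc(NULL, PAGE_SIZE, MEM_COMMIT, PAGE_READWRITE);
        memset(buf, i, PAGE_SIZE);
        seg[i].Buffer = PtrToPtr64(buf);
    }
    seg[N_PAGES].Buffer = NULL;      // the segment array must be NULL-terminated

    memset(&ov, 0, sizeof(ov));      // write at file offset 0

    // Issue all N pages as one gather write, then wait for it to complete
    if (!WriteFileGather(h, seg, N_PAGES * PAGE_SIZE, NULL, &ov) &&
        GetLastError() != ERROR_IO_PENDING) {
        printf("WriteFileGather failed: %lu\n", GetLastError());
        return 1;
    }
    if (!GetOverlappedResult(h, &ov, &written, TRUE)) return 1;

    printf("wrote %lu bytes in one call\n", written);
    CloseHandle(h);
    return 0;
}

Whether a single gather write like this actually beats N plain WriteFile()
calls is exactly what would need to be measured.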
