Hi Richard,

Richard Hipp <d...@sqlite.org> writes:

> In shared-cache mode, the page cache is shared across threads. That means
> that each thread must acquire a mutex on the page cache in order to read
> it. Which means that access to the page cache is serialized.

I just ran our concurrency test in different configurations and I
observed similar behavior. That is, in shared-cache mode, read-only
transactions on the same table execute pretty much sequentially.

Also, your explanation doesn't feel quite satisfactory to me. In his
original email, Eric mentioned that his table contains just 50 rows.
Surely all this data would be loaded into the cache the first time
it is requested and then accessed concurrently by all the threads.
The only way I can see the sequential performance being explained
here is if the cache mutex does not distinguish between readers and
writers (which would seem a fairly natural distinction to make).

In our test, on the other hand, each thread queries its own set of
rows from the table. So, based on your explanation, here each thread
should end up with its own set of pages (more or less). However, even
in this case, I still observe near-sequential performance.

Any idea what else might be going on here?

Boris
-- 
Boris Kolpackov, Code Synthesis        http://codesynthesis.com/~boris/blog
Compiler-based ORM system for C++      http://codesynthesis.com/products/odb
Open-source XML data binding for C++   http://codesynthesis.com/products/xsd
XML data binding for embedded systems  http://codesynthesis.com/products/xsde

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users