A single server process can be used to track "global" information, but shared memory mapped by otherwise unrelated processes might serve just as well. For example, a common mmap'ed file could hold the lock state for each process. Of course, such a scheme would have to handle crashed processes without burdening the common case. But assuming it did, is that the main obstacle?
Igor Tandetnik wrote:
> Iker Arizmendi wrote:
>> The question is whether a client-server design is /necessary/ to
>> efficiently implement higher concurrency. It appears to be easier
>> to do so with a client-server model, but is such a model required?
>> Are there functions performed by a server process that cannot be
>> carried out at all without it?
>
> On a high, theoretical level, the advantage of a single server process
> is that it has more context. It knows intimate details about everything
> going on in the system, and can manage concurrent tasks more efficiently
> using this information (e.g. use fine-grained locks). On the other hand,
> multiple cooperating processes share only a limited amount of
> information; each process knows very little beyond what it itself is
> doing.
>
> Igor Tandetnik
>
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

-- 
Iker Arizmendi
AT&T Labs - Research
Speech and Image Processing Lab
e: i...@research.att.com
w: http://research.att.com
p: 973-360-8516