On Tue, Feb 21, 2012 at 07:21:28PM -0500, Stefan Berger wrote:
> On 02/21/2012 06:08 PM, Michael S. Tsirkin wrote:
> >On Tue, Feb 21, 2012 at 05:30:32PM -0500, Stefan Berger wrote:
> >
> >
> >>At the moment there are two backends that need threading: the
> >>libtpms and passthrough backends. Both will require locking of
> >>data structures that belong to the frontend. Only the null driver
> >>doesn't need a thread; there the main thread can call into the
> >>backend, create the response, and call back into the frontend via
> >>the callback to deliver the response. If the structures are
> >>protected via mutexes, then only the null driver (which we don't
> >>want anyway) may end up grabbing mutexes that aren't really
> >>necessary, while the two other backends do need them. I don't see
> >>the mutexes as problematic. The frontend at least protects its data
> >>structures for the callbacks and other API calls it offers, and
> >>those are simply thread-safe.
> >>
> >>     Stefan
> >Worst case, you can take a qemu mutex. Is the TPM so
> >performance-sensitive that contention on that
> >lock would be a problem?
> >
> We have to lock a common data structure 'somehow'. I don't see a way
> around it.

Naturally. But it could be an implementation detail of
the backend.
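
Something like the following is what I have in mind -- a rough sketch
only, the struct and function names are made up, the locking calls are
the usual qemu-thread ones:

#include "qemu-thread.h"   /* QemuMutex, QemuCond */

typedef struct TPMBackendState {
    QemuMutex lock;        /* protects pending_request; private to the backend */
    QemuCond  cond;        /* signalled when a new request is queued */
    void     *pending_request;
} TPMBackendState;

/* Called from the frontend (vcpu thread); the lock never leaks out. */
static void tpm_backend_queue_request(TPMBackendState *s, void *req)
{
    qemu_mutex_lock(&s->lock);
    s->pending_request = req;
    qemu_cond_signal(&s->cond);
    qemu_mutex_unlock(&s->lock);
}

The frontend just hands the request over; whether and how the backend
serializes things stays its own business.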

> The locking times are short, since no major computation is done
> while the lock is held.
> Considering that the TPM TIS interface is a non-DMA, byte-by-byte
> send/receive interface, performance problems, if there are any at
> all, are to be found elsewhere: in VMExits, for example, or, if the
> interface is used in polling mode, in the interval between polls.
> 
>    Stefan

In that case, you can take the qemu lock in your backend
and avoid locking in the frontend completely.
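
Roughly like this (untested sketch; tpm_backend_wait_for_request(),
tpm_passthrough_process() and tis_deliver_response() are invented
names, please check whether qemu_mutex_lock_iothread() is the right
entry point in your tree):

static void *tpm_backend_thread(void *opaque)
{
    TPMBackendState *s = opaque;

    for (;;) {
        void *req = tpm_backend_wait_for_request(s); /* blocks, no qemu lock held */

        tpm_passthrough_process(req);    /* long-running work, outside the big lock */

        qemu_mutex_lock_iothread();      /* serialize with vcpus/iothread */
        tis_deliver_response(req);       /* plain callback into the TIS frontend */
        qemu_mutex_unlock_iothread();
    }

    return NULL;
}

That way the frontend code stays single-threaded as far as it can tell.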

-- 
MST
